Jan 27 07:45:32 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 07:45:32 crc restorecon[4754]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 07:45:32 crc restorecon[4754]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc 
restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc 
restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 
crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 07:45:32 crc restorecon[4754]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:32 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc 
restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc 
restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc 
restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 07:45:33 crc restorecon[4754]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc 
restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 07:45:33 crc restorecon[4754]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc 
restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 07:45:33 crc restorecon[4754]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 07:45:34 crc kubenswrapper[4799]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 07:45:34 crc kubenswrapper[4799]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 07:45:34 crc kubenswrapper[4799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 07:45:34 crc kubenswrapper[4799]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 07:45:34 crc kubenswrapper[4799]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 07:45:34 crc kubenswrapper[4799]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.119331 4799 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125589 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125631 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125644 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125656 4799 feature_gate.go:330] unrecognized feature gate: Example Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125668 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125682 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125696 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125711 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125724 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125742 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125755 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125768 4799 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125779 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125791 4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125815 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125826 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125837 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125848 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125859 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125871 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125882 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125892 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 
07:45:34.125903 4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125913 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125924 4799 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125934 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125945 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125956 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125967 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125978 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125987 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.125997 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126006 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126016 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126026 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126036 4799 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126046 4799 feature_gate.go:330] unrecognized feature 
gate: MachineConfigNodes Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126056 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126065 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126075 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126085 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126098 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126109 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126125 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126135 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126146 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126160 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126172 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126183 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126194 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126206 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126217 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126229 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126239 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126249 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126261 4799 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126272 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126283 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126292 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126343 4799 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126358 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
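The repeated `feature_gate.go:330` warnings above are emitted once per OpenShift-specific gate name that this kubelet build does not register, and each gate recurs several times as different components re-parse the gate list. When triaging, it can help to reduce the noise to the unique set of gate names — a minimal sketch, assuming only the `unrecognized feature gate: <Name>` message format shown in the entries above (the sample text and helper name are illustrative, not taken from this log):

```python
import re

# Collect the unique set of gates the kubelet reported as unrecognized.
# The message format is assumed from the feature_gate.go:330 warnings above.
GATE_RE = re.compile(r"unrecognized feature gate: ([A-Za-z0-9]+)")

def unrecognized_gates(log_text: str) -> set[str]:
    # findall returns one capture per warning; set() de-duplicates repeats.
    return set(GATE_RE.findall(log_text))

sample = (
    "W0127 07:45:34.125724 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP "
    "W0127 07:45:34.125815 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI "
    "W0127 07:45:34.129287 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI"
)
print(sorted(unrecognized_gates(sample)))  # ['GatewayAPI', 'MultiArchInstallGCP']
```

The same pattern applied to the full journal (e.g. the output of `journalctl -u kubelet`) gives one line per distinct gate instead of hundreds of repeated warnings.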
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126370 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126380 4799 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126392 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126402 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126412 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126422 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126432 4799 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126442 4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126453 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.126464 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126651 4799 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126675 4799 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126701 4799 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126716 4799 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126734 4799 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 
07:45:34.126748 4799 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126765 4799 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126780 4799 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126792 4799 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126805 4799 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126819 4799 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126831 4799 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126843 4799 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126855 4799 flags.go:64] FLAG: --cgroup-root="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126866 4799 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126878 4799 flags.go:64] FLAG: --client-ca-file="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126889 4799 flags.go:64] FLAG: --cloud-config="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126901 4799 flags.go:64] FLAG: --cloud-provider="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126912 4799 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126925 4799 flags.go:64] FLAG: --cluster-domain="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126936 4799 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126948 4799 flags.go:64] FLAG: --config-dir="" Jan 27 07:45:34 crc 
kubenswrapper[4799]: I0127 07:45:34.126959 4799 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126972 4799 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126986 4799 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.126997 4799 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127009 4799 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127020 4799 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127031 4799 flags.go:64] FLAG: --contention-profiling="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127042 4799 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127054 4799 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127066 4799 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127077 4799 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127091 4799 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127103 4799 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127114 4799 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127128 4799 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127139 4799 flags.go:64] FLAG: --enable-server="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127150 
4799 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127166 4799 flags.go:64] FLAG: --event-burst="100" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127179 4799 flags.go:64] FLAG: --event-qps="50" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127190 4799 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127201 4799 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127214 4799 flags.go:64] FLAG: --eviction-hard="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127228 4799 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127239 4799 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127250 4799 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127262 4799 flags.go:64] FLAG: --eviction-soft="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127273 4799 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127285 4799 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127296 4799 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127343 4799 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127383 4799 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127394 4799 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127407 4799 flags.go:64] FLAG: --feature-gates="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127422 4799 
flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127434 4799 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127446 4799 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127457 4799 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127469 4799 flags.go:64] FLAG: --healthz-port="10248" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127480 4799 flags.go:64] FLAG: --help="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127492 4799 flags.go:64] FLAG: --hostname-override="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127503 4799 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127515 4799 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127526 4799 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127537 4799 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127548 4799 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127561 4799 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127575 4799 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127586 4799 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127597 4799 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127608 4799 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 07:45:34 crc kubenswrapper[4799]: 
I0127 07:45:34.127620 4799 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127632 4799 flags.go:64] FLAG: --kube-reserved="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127643 4799 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127654 4799 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127666 4799 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127676 4799 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127688 4799 flags.go:64] FLAG: --lock-file="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127701 4799 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127713 4799 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127725 4799 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127742 4799 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127754 4799 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127765 4799 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127776 4799 flags.go:64] FLAG: --logging-format="text" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127787 4799 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127799 4799 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127810 4799 flags.go:64] FLAG: --manifest-url="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 
07:45:34.127823 4799 flags.go:64] FLAG: --manifest-url-header="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127837 4799 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127849 4799 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127862 4799 flags.go:64] FLAG: --max-pods="110" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127874 4799 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127886 4799 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127897 4799 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127907 4799 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127917 4799 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127929 4799 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127956 4799 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127982 4799 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.127994 4799 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128005 4799 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128016 4799 flags.go:64] FLAG: --pod-cidr="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128027 4799 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128044 4799 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128055 4799 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128066 4799 flags.go:64] FLAG: --pods-per-core="0" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128078 4799 flags.go:64] FLAG: --port="10250" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128090 4799 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128101 4799 flags.go:64] FLAG: --provider-id="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128112 4799 flags.go:64] FLAG: --qos-reserved="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128124 4799 flags.go:64] FLAG: --read-only-port="10255" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128135 4799 flags.go:64] FLAG: --register-node="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128147 4799 flags.go:64] FLAG: --register-schedulable="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128160 4799 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128180 4799 flags.go:64] FLAG: --registry-burst="10" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128191 4799 flags.go:64] FLAG: --registry-qps="5" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128202 4799 flags.go:64] FLAG: --reserved-cpus="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128219 4799 flags.go:64] FLAG: --reserved-memory="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128233 4799 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 
07:45:34.128245 4799 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128256 4799 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128267 4799 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128279 4799 flags.go:64] FLAG: --runonce="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128290 4799 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128338 4799 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128350 4799 flags.go:64] FLAG: --seccomp-default="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128361 4799 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128371 4799 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128383 4799 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128394 4799 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128409 4799 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128418 4799 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128427 4799 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128436 4799 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128445 4799 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128455 4799 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 
07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128464 4799 flags.go:64] FLAG: --system-cgroups="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128473 4799 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128489 4799 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128498 4799 flags.go:64] FLAG: --tls-cert-file="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128507 4799 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128518 4799 flags.go:64] FLAG: --tls-min-version="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128527 4799 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128536 4799 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128545 4799 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128554 4799 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128606 4799 flags.go:64] FLAG: --v="2" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128618 4799 flags.go:64] FLAG: --version="false" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128629 4799 flags.go:64] FLAG: --vmodule="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128647 4799 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.128656 4799 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129171 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129190 4799 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129200 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129210 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129221 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129265 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129277 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129287 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129344 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129355 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129370 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
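The `flags.go:64] FLAG:` entries above dump the kubelet's effective command-line flags, one flag per journal entry. A sketch for folding such a dump into a lookup table for comparison against an expected configuration — the `FLAG: --name="value"` format is assumed from the lines above, and `parse_flags` plus the sample text are hypothetical, not part of this log:

```python
import re

# Parse kubelet "FLAG: --name=\"value\"" journal entries into a dict.
# Format assumed from the flags.go:64 lines above; values keep their quoted
# contents verbatim (e.g. '[]', '2m0s', '') without further interpretation.
FLAG_RE = re.compile(r'flags\.go:64\] FLAG: --([A-Za-z0-9-]+)="(.*?)"')

def parse_flags(log_text: str) -> dict[str, str]:
    return {m.group(1): m.group(2) for m in FLAG_RE.finditer(log_text)}

sample = (
    'I0127 07:45:34.126843 4799 flags.go:64] FLAG: --cgroup-driver="cgroupfs" '
    'I0127 07:45:34.126948 4799 flags.go:64] FLAG: --config-dir=""'
)
flags = parse_flags(sample)
print(flags)  # {'cgroup-driver': 'cgroupfs', 'config-dir': ''}
```

Because later entries overwrite earlier keys, running this over a journal that spans several kubelet restarts yields the flag values from the most recent dump.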
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129389 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129439 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129451 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129462 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129472 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129482 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129492 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129542 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129553 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129562 4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129572 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129581 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129591 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129641 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129650 
4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129660 4799 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129670 4799 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129680 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129727 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129741 4799 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129751 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129761 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129771 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129822 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129834 4799 feature_gate.go:330] unrecognized feature gate: Example Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129844 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129853 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129863 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129873 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129920 
4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129931 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129940 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.129960 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130011 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130025 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130037 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130047 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130057 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130104 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130116 4799 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130126 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130136 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130146 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 07:45:34 crc 
kubenswrapper[4799]: W0127 07:45:34.130155 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130203 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130215 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130225 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130235 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130245 4799 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130255 4799 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130342 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130358 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130368 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130378 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130388 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130440 4799 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130451 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130465 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130477 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.130491 4799 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.130561 4799 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.146204 4799 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.146700 4799 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146876 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146896 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146906 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146917 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146927 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146937 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146946 4799 feature_gate.go:330] unrecognized feature gate: 
ChunkSizeMiB Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146955 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146965 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146976 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146986 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.146995 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147005 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147015 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147025 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147034 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147044 4799 feature_gate.go:330] unrecognized feature gate: Example Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147051 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147059 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147067 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147075 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147082 4799 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147090 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147097 4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147105 4799 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147113 4799 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147138 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147175 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147192 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147217 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147228 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147242 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147255 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147268 4799 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147296 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147340 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147352 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147364 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147376 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147387 4799 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147397 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147442 4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147452 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147463 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147473 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147483 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147492 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 07:45:34 crc 
kubenswrapper[4799]: W0127 07:45:34.147503 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147512 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147523 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147536 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147548 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147560 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147572 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147583 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147595 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147605 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147614 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147624 4799 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147632 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147640 4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot 
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147662 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147670 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147678 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147686 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147695 4799 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147702 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147710 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147718 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147725 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.147755 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.147769 4799 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148192 
4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148231 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148244 4799 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148253 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148264 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148276 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148285 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148294 4799 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148345 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148359 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148371 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148382 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148393 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148403 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148412 4799 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148420 4799 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148450 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148471 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148494 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148504 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148514 4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148523 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148534 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148544 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148554 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148563 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148573 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148582 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148591 4799 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148601 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148610 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148619 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148629 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148639 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148667 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148678 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148688 4799 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148697 4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148707 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148717 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148727 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148737 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148748 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 07:45:34 crc kubenswrapper[4799]: 
W0127 07:45:34.148759 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148768 4799 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148779 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148790 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148798 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148805 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148813 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148838 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148845 4799 feature_gate.go:330] unrecognized feature gate: Example Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148866 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148873 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148881 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148890 4799 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148902 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148912 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 07:45:34 
crc kubenswrapper[4799]: W0127 07:45:34.148925 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148939 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148952 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148977 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.148990 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149002 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149011 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149021 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149030 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149041 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149050 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149059 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.149072 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.149088 4799 feature_gate.go:386] 
feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.150600 4799 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.158850 4799 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.159028 4799 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.162722 4799 server.go:997] "Starting client certificate rotation" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.162768 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.163012 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-18 16:15:00.176920894 +0000 UTC Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.163154 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.241027 4799 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.243980 4799 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.247929 4799 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.266322 4799 log.go:25] "Validated CRI v1 runtime API" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.305290 4799 log.go:25] "Validated CRI v1 image API" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.308613 4799 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.315031 4799 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-07-40-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.315233 4799 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.347393 4799 manager.go:217] Machine: {Timestamp:2026-01-27 07:45:34.342743895 +0000 UTC m=+0.653848030 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 
CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d3817001-797e-409c-8ccf-0b6489f48d4e BootID:908ca879-28d5-4e99-9761-e4bdaff0505d Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e5:fe:20 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e5:fe:20 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:43:af:dc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:46:15:83 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:db:bd:d6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c3:12:ec Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:7f:c6:d4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:72:a6:d4:bd:3f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:e2:e4:37:b3:99 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] 
Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.348101 4799 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.348613 4799 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.351272 4799 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.351475 4799 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.351514 4799 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.351773 4799 topology_manager.go:138] "Creating topology manager with none policy"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.351783 4799 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.352339 4799 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.352371 4799 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.352671 4799 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.352773 4799 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.358220 4799 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.358253 4799 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.358279 4799 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.358293 4799 kubelet.go:324] "Adding apiserver pod source"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.358330 4799 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.364352 4799 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.364594 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.364727 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.364817 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.365153 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.366479 4799 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.369476 4799 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.371517 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.371712 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.371844 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372002 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372140 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372246 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372475 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372578 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372606 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372626 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372657 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.372678 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.373899 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.374994 4799 server.go:1280] "Started kubelet"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.375045 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Jan 27 07:45:34 crc systemd[1]: Started Kubernetes Kubelet.
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.380785 4799 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.383094 4799 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.383878 4799 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.384954 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.385169 4799 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.385228 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:55:14.27213159 +0000 UTC
Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.385598 4799 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.385681 4799 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.385711 4799 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.385733 4799 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.395226 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.395801 4799 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.395843 4799 factory.go:55] Registering systemd factory
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.395858 4799 factory.go:221] Registration of the systemd container factory successfully
Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.395777 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.396093 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.396291 4799 factory.go:153] Registering CRI-O factory
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.396340 4799 factory.go:221] Registration of the crio container factory successfully
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.396366 4799 factory.go:103] Registering Raw factory
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.396385 4799 manager.go:1196] Started watching for new ooms in manager
Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.395228 4799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e86d0aa7ec038 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:45:34.374936632 +0000 UTC m=+0.686040767,LastTimestamp:2026-01-27 07:45:34.374936632 +0000 UTC m=+0.686040767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.398909 4799 manager.go:319] Starting recovery of all containers
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.399117 4799 server.go:460] "Adding debug handlers to kubelet server"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409358 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409448 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409470 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409494 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409514 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409535 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409553 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409571 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409595 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409614 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409631 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409649 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409666 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409688 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409705 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409725 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409742 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409762 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409781 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409799 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409816 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409835 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409853 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409873 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409891 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409911 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409937 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409959 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409978 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.409997 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410021 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410039 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410057 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410075 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410095 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410113 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410131 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410149 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410166 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410186 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410206 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410224 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410242 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410260 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410278 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410296 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410342 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.410366 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418392 4799 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418491 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418521 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418544 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418567 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418601 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418626 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418647 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418667 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418686 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418700 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418722 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418742 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418765 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418785 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418803 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418818 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418835 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418854 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418871 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418886 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418908 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418923 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418940 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418963 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.418983 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419008 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419028 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419047 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419063 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419079 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419096 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419112 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419127 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419143 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419159 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419175 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419197 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419296 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419338 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419356 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419372 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419389 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419413 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419434 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419453 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419471 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419488 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419503 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419519 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419535 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419556 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419576 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419597 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419617 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419637 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419660 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419691 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 
07:45:34.419711 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419729 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419747 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419763 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419781 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419798 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419817 4799 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419832 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419849 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419869 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419892 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419911 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419926 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419943 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419960 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419975 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.419998 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420015 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420068 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420083 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420099 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420121 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420136 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420152 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420167 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420182 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420199 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420214 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420229 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420243 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420258 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420274 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420289 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420326 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420341 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420358 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420373 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" 
seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420388 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420402 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420417 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420430 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420446 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420462 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 
07:45:34.420476 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420509 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420526 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420543 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420565 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420586 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420605 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420623 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420644 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420662 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420676 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420691 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420705 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420719 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420733 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420750 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420764 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420778 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420793 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420806 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420822 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420839 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420858 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420877 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420895 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420911 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420927 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420943 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420959 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420974 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.420990 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" 
seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421007 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421026 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421078 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421095 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421111 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421126 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 
07:45:34.421140 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421156 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421170 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421186 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421201 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421216 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421231 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421245 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421260 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421274 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421292 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421328 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421345 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421366 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421384 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421405 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421425 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421444 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421461 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421477 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421493 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421509 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421522 4799 reconstruct.go:97] "Volume reconstruction finished" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.421532 4799 reconciler.go:26] "Reconciler: start to sync state" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.429780 4799 manager.go:324] Recovery completed Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.443329 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.445729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.445803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.445817 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.446452 4799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.449520 4799 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.449558 4799 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.449583 4799 state_mem.go:36] "Initialized new in-memory state store" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.450039 4799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.450115 4799 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.450164 4799 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.450242 4799 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.451566 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.451664 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:34 crc 
kubenswrapper[4799]: I0127 07:45:34.471447 4799 policy_none.go:49] "None policy: Start" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.472662 4799 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.472704 4799 state_mem.go:35] "Initializing new in-memory state store" Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.486291 4799 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.533625 4799 manager.go:334] "Starting Device Plugin manager" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.533689 4799 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.533718 4799 server.go:79] "Starting device plugin registration server" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.534234 4799 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.534261 4799 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.534721 4799 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.534816 4799 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.534833 4799 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.548730 4799 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.550884 4799 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.550976 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.552403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.552431 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.552441 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.552616 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553256 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553420 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553391 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553532 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553808 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553935 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.553970 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.554889 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.554910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.554919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.554973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555016 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555268 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555477 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555551 
4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.555579 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.556627 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.556665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.556678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.556747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.556762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.556771 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.556792 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.557048 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.557924 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.560821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.560865 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.560896 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.561088 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.561133 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.561183 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.561515 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.561567 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.564006 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.564067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.564083 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.596624 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625483 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625678 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625741 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625778 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625911 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625956 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.625979 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.626073 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.626099 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.626144 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.626170 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.626237 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.626290 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.626375 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.634418 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.636276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.636339 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.636357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.636384 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.636888 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: 
connection refused" node="crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.727934 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728005 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728056 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728088 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728113 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728134 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728165 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728201 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728265 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728321 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" 
Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728360 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728408 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728457 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728381 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728520 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728421 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728554 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728495 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728500 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728391 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728430 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728612 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728685 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728335 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728756 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.728818 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.729020 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.837524 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.839248 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.839344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.839360 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.839397 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.839948 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.882758 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.891349 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.911054 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: I0127 07:45:34.931649 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.938225 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cb5bd6cb3c615eb45fa9a3993fd5eeec401b50bfbd5a175398d18646aee59d76 WatchSource:0}: Error finding container cb5bd6cb3c615eb45fa9a3993fd5eeec401b50bfbd5a175398d18646aee59d76: Status 404 returned error can't find the container with id cb5bd6cb3c615eb45fa9a3993fd5eeec401b50bfbd5a175398d18646aee59d76 Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.940057 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-df7419e6c71387ef57763b67c48380c972178a77184c5dafced8ffe585f11737 WatchSource:0}: Error finding container df7419e6c71387ef57763b67c48380c972178a77184c5dafced8ffe585f11737: Status 404 returned error can't find the container with id df7419e6c71387ef57763b67c48380c972178a77184c5dafced8ffe585f11737 Jan 27 07:45:34 crc kubenswrapper[4799]: 
I0127 07:45:34.940479 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.948438 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-eef7ddfb2ee0901279d57ccc7e42edea096449e7889423010c37665e5b32d7e1 WatchSource:0}: Error finding container eef7ddfb2ee0901279d57ccc7e42edea096449e7889423010c37665e5b32d7e1: Status 404 returned error can't find the container with id eef7ddfb2ee0901279d57ccc7e42edea096449e7889423010c37665e5b32d7e1 Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.954180 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1bcd38253521d795959bd16a021655f7f53e79b68fc1be912de5252f96d75045 WatchSource:0}: Error finding container 1bcd38253521d795959bd16a021655f7f53e79b68fc1be912de5252f96d75045: Status 404 returned error can't find the container with id 1bcd38253521d795959bd16a021655f7f53e79b68fc1be912de5252f96d75045 Jan 27 07:45:34 crc kubenswrapper[4799]: W0127 07:45:34.963003 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-70bbf0b9e19d063a68da1ef7b6eb82d5dfeb1678efa5c714252c30862d1531a2 WatchSource:0}: Error finding container 70bbf0b9e19d063a68da1ef7b6eb82d5dfeb1678efa5c714252c30862d1531a2: Status 404 returned error can't find the container with id 70bbf0b9e19d063a68da1ef7b6eb82d5dfeb1678efa5c714252c30862d1531a2 Jan 27 07:45:34 crc kubenswrapper[4799]: E0127 07:45:34.998190 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Jan 27 07:45:35 crc kubenswrapper[4799]: W0127 07:45:35.216045 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:35 crc kubenswrapper[4799]: E0127 07:45:35.216159 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.240922 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.242765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.242795 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.242806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.242832 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:45:35 crc kubenswrapper[4799]: E0127 07:45:35.243172 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: 
connection refused" node="crc" Jan 27 07:45:35 crc kubenswrapper[4799]: W0127 07:45:35.359518 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:35 crc kubenswrapper[4799]: E0127 07:45:35.359622 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.376408 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.385676 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:42:52.044666801 +0000 UTC Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.456662 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"70bbf0b9e19d063a68da1ef7b6eb82d5dfeb1678efa5c714252c30862d1531a2"} Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.459077 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1bcd38253521d795959bd16a021655f7f53e79b68fc1be912de5252f96d75045"} Jan 27 07:45:35 crc 
kubenswrapper[4799]: I0127 07:45:35.460111 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eef7ddfb2ee0901279d57ccc7e42edea096449e7889423010c37665e5b32d7e1"} Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.461080 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"df7419e6c71387ef57763b67c48380c972178a77184c5dafced8ffe585f11737"} Jan 27 07:45:35 crc kubenswrapper[4799]: I0127 07:45:35.461974 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb5bd6cb3c615eb45fa9a3993fd5eeec401b50bfbd5a175398d18646aee59d76"} Jan 27 07:45:35 crc kubenswrapper[4799]: W0127 07:45:35.522322 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:35 crc kubenswrapper[4799]: E0127 07:45:35.522464 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:35 crc kubenswrapper[4799]: E0127 07:45:35.799064 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Jan 27 07:45:36 crc 
kubenswrapper[4799]: W0127 07:45:36.029637 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:36 crc kubenswrapper[4799]: E0127 07:45:36.029769 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.044234 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.046102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.046157 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.046170 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.046244 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:45:36 crc kubenswrapper[4799]: E0127 07:45:36.048557 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.376481 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.386018 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:19:41.002643896 +0000 UTC Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.442854 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 07:45:36 crc kubenswrapper[4799]: E0127 07:45:36.444720 4799 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.465938 4799 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f" exitCode=0 Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.465996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.466112 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.466979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.467012 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.467024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.470264 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.470313 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.470328 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.470338 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.470395 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.473804 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 
07:45:36.473851 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.473861 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.479093 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202" exitCode=0 Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.479179 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.479338 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.480832 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.480883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.480893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.485677 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.487324 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.487349 4799 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.487371 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.487839 4799 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f597e8be8ce5125c1221f89be67eb02793d0673551d973741d8d1e1a470fbb14" exitCode=0 Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.487933 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f597e8be8ce5125c1221f89be67eb02793d0673551d973741d8d1e1a470fbb14"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.487921 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.489230 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.489272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.489287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.490449 4799 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f" exitCode=0 Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.490496 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f"} Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.490643 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.491554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.491723 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:36 crc kubenswrapper[4799]: I0127 07:45:36.491795 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.375990 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.386120 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:24:09.299409469 +0000 UTC Jan 27 07:45:37 crc kubenswrapper[4799]: E0127 07:45:37.399792 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.507801 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.507887 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.507912 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.507931 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.509698 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7be502e8f64e3e936b549c9bf744c711d41ed82bea32d16c1a605e494d30e273"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.509866 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.511760 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.511975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.511991 
4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.513051 4799 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5" exitCode=0 Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.513138 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.513201 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.514444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.514495 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.514515 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.515320 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.515779 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.515899 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948"} Jan 27 
07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.515921 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.515932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332"} Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.517680 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.517727 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.517741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.517756 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.517770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.517772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.648950 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.650065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.650114 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.650128 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.650164 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:45:37 crc kubenswrapper[4799]: E0127 07:45:37.650626 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Jan 27 07:45:37 crc kubenswrapper[4799]: I0127 07:45:37.757944 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:37 crc kubenswrapper[4799]: W0127 07:45:37.866039 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 27 07:45:37 crc kubenswrapper[4799]: E0127 07:45:37.866128 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.386426 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:11:17.121074 +0000 UTC Jan 27 07:45:38 crc 
kubenswrapper[4799]: I0127 07:45:38.520916 4799 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5" exitCode=0 Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.520990 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5"} Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.521122 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.522555 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.522581 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.522590 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.527291 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.527578 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.527675 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.527423 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71"} Jan 27 
07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.527518 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.527563 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.529337 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.529443 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.529542 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.529626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.529774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.529787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.531189 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.531209 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.531218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.531465 4799 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.531583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:38 crc kubenswrapper[4799]: I0127 07:45:38.531694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.387257 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:29:58.996360629 +0000 UTC Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.536773 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672"} Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.536839 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507"} Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.536853 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2"} Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.536876 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512"} Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.536852 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:45:39 crc 
kubenswrapper[4799]: I0127 07:45:39.536941 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.537913 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.537967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:39 crc kubenswrapper[4799]: I0127 07:45:39.537981 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.387977 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:17:07.203235425 +0000 UTC Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.546447 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94"} Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.546618 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.547645 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.547683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.547694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.586292 4799 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.851008 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.852646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.852682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.852692 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.852714 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.875784 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.876051 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.876138 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.878332 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.878378 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:40 crc kubenswrapper[4799]: I0127 07:45:40.878389 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.064497 4799 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.072088 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.072481 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.074448 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.074513 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.074534 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.388164 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:14:29.567995149 +0000 UTC Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.424584 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.424897 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.426857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.426900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.426918 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.540204 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.549341 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.549398 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.549474 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.549552 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.550441 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.550472 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.550483 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.550699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.550749 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.550772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:41 crc 
kubenswrapper[4799]: I0127 07:45:41.552223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.552274 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:41 crc kubenswrapper[4799]: I0127 07:45:41.552332 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:42 crc kubenswrapper[4799]: I0127 07:45:42.388640 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:25:00.515714931 +0000 UTC Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.389013 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:23:27.247495788 +0000 UTC Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.646703 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.646989 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.648896 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.648950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.648970 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.866100 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.866525 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.868139 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.868182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.868204 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.876196 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.960671 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.960961 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.962940 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.963028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:43 crc kubenswrapper[4799]: I0127 07:45:43.963056 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:44 crc kubenswrapper[4799]: I0127 07:45:44.389330 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-12-25 00:35:09.601981323 +0000 UTC Jan 27 07:45:44 crc kubenswrapper[4799]: I0127 07:45:44.540118 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:45:44 crc kubenswrapper[4799]: I0127 07:45:44.540257 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 07:45:44 crc kubenswrapper[4799]: E0127 07:45:44.548961 4799 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 07:45:44 crc kubenswrapper[4799]: I0127 07:45:44.560471 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:44 crc kubenswrapper[4799]: I0127 07:45:44.562670 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:44 crc kubenswrapper[4799]: I0127 07:45:44.562729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:44 crc kubenswrapper[4799]: I0127 07:45:44.562754 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:45 crc kubenswrapper[4799]: I0127 07:45:45.389797 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:02:36.419422359 +0000 UTC Jan 
27 07:45:46 crc kubenswrapper[4799]: I0127 07:45:46.306203 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 07:45:46 crc kubenswrapper[4799]: I0127 07:45:46.306465 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:46 crc kubenswrapper[4799]: I0127 07:45:46.308034 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:46 crc kubenswrapper[4799]: I0127 07:45:46.308081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:46 crc kubenswrapper[4799]: I0127 07:45:46.308094 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:46 crc kubenswrapper[4799]: I0127 07:45:46.390675 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:35:40.552023926 +0000 UTC Jan 27 07:45:47 crc kubenswrapper[4799]: I0127 07:45:47.391675 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:16:28.171677441 +0000 UTC Jan 27 07:45:47 crc kubenswrapper[4799]: I0127 07:45:47.763009 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:47 crc kubenswrapper[4799]: I0127 07:45:47.763206 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:47 crc kubenswrapper[4799]: I0127 07:45:47.764602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:47 crc kubenswrapper[4799]: I0127 07:45:47.764661 4799 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:47 crc kubenswrapper[4799]: I0127 07:45:47.764674 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:48 crc kubenswrapper[4799]: W0127 07:45:48.022687 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 07:45:48 crc kubenswrapper[4799]: I0127 07:45:48.022891 4799 trace.go:236] Trace[929552604]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:45:38.020) (total time: 10001ms): Jan 27 07:45:48 crc kubenswrapper[4799]: Trace[929552604]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:45:48.022) Jan 27 07:45:48 crc kubenswrapper[4799]: Trace[929552604]: [10.001856176s] [10.001856176s] END Jan 27 07:45:48 crc kubenswrapper[4799]: E0127 07:45:48.022949 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 07:45:48 crc kubenswrapper[4799]: W0127 07:45:48.126236 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 07:45:48 crc kubenswrapper[4799]: I0127 07:45:48.126405 4799 trace.go:236] Trace[39851113]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:45:38.124) (total time: 10001ms): Jan 27 07:45:48 crc 
kubenswrapper[4799]: Trace[39851113]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:45:48.126) Jan 27 07:45:48 crc kubenswrapper[4799]: Trace[39851113]: [10.001866185s] [10.001866185s] END Jan 27 07:45:48 crc kubenswrapper[4799]: E0127 07:45:48.126448 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 07:45:48 crc kubenswrapper[4799]: I0127 07:45:48.377072 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 07:45:48 crc kubenswrapper[4799]: I0127 07:45:48.392456 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:28:32.298417282 +0000 UTC Jan 27 07:45:48 crc kubenswrapper[4799]: W0127 07:45:48.669904 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 07:45:48 crc kubenswrapper[4799]: I0127 07:45:48.670129 4799 trace.go:236] Trace[1826424494]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:45:38.667) (total time: 10002ms): Jan 27 07:45:48 crc kubenswrapper[4799]: Trace[1826424494]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:45:48.669) Jan 27 
07:45:48 crc kubenswrapper[4799]: Trace[1826424494]: [10.002119362s] [10.002119362s] END Jan 27 07:45:48 crc kubenswrapper[4799]: E0127 07:45:48.670205 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 07:45:49 crc kubenswrapper[4799]: I0127 07:45:49.293118 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 07:45:49 crc kubenswrapper[4799]: I0127 07:45:49.293218 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 07:45:49 crc kubenswrapper[4799]: I0127 07:45:49.298007 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 07:45:49 crc kubenswrapper[4799]: I0127 07:45:49.298103 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed 
with statuscode: 403" Jan 27 07:45:49 crc kubenswrapper[4799]: I0127 07:45:49.393507 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:55:42.524984391 +0000 UTC Jan 27 07:45:50 crc kubenswrapper[4799]: I0127 07:45:50.394231 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:21:40.774898824 +0000 UTC Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.070819 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.070970 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.073448 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.073511 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.073527 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.081095 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.394889 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:24:52.75217948 +0000 UTC Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.580897 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:45:51 crc 
kubenswrapper[4799]: I0127 07:45:51.582562 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.582604 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:45:51 crc kubenswrapper[4799]: I0127 07:45:51.582617 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:45:52 crc kubenswrapper[4799]: I0127 07:45:52.395996 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:39:17.782170346 +0000 UTC Jan 27 07:45:52 crc kubenswrapper[4799]: I0127 07:45:52.667482 4799 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 07:45:52 crc kubenswrapper[4799]: I0127 07:45:52.719049 4799 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 07:45:53 crc kubenswrapper[4799]: I0127 07:45:53.397178 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:47:02.278809444 +0000 UTC Jan 27 07:45:53 crc kubenswrapper[4799]: I0127 07:45:53.478881 4799 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.286092 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.294168 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode 
infra config cache not synchronized" node="crc" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.298054 4799 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.300136 4799 trace.go:236] Trace[757622073]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 07:45:42.916) (total time: 11383ms): Jan 27 07:45:54 crc kubenswrapper[4799]: Trace[757622073]: ---"Objects listed" error: 11383ms (07:45:54.299) Jan 27 07:45:54 crc kubenswrapper[4799]: Trace[757622073]: [11.383971399s] [11.383971399s] END Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.300181 4799 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.308113 4799 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.352765 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41616->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.352852 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41616->192.168.126.11:17697: read: connection reset by peer" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.353416 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe 
status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.353524 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.354153 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.354223 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.378505 4799 apiserver.go:52] "Watching apiserver" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.384826 4799 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.385488 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.386614 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.386874 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.387026 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.387179 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.387260 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.387428 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.387520 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.387650 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.387653 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.390066 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.390752 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.390998 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.393267 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.393731 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.394057 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.394297 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.394696 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.394925 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.395509 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:54 crc 
kubenswrapper[4799]: I0127 07:45:54.397403 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:39:06.249194095 +0000 UTC Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398350 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398441 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398480 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398509 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398537 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398565 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398594 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398621 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398654 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398683 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398714 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398740 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.398775 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.398910 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.398995 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:54.898973718 +0000 UTC m=+21.210077793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.399819 4799 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.400185 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.400650 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.400724 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:54.900700975 +0000 UTC m=+21.211805050 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.402869 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.403460 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.410619 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.419143 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.420765 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424054 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424169 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424234 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424440 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:54.924376976 +0000 UTC m=+21.235481041 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424645 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424665 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424674 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.424705 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:54.924697884 +0000 UTC m=+21.235801949 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.424784 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.428704 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.429552 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.431841 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.444909 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.449336 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.457557 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.465348 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.486822 4799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.495699 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500483 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500570 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500602 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500661 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.500695 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:45:55.000661171 +0000 UTC m=+21.311765236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500751 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500789 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500818 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500845 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:45:54 crc kubenswrapper[4799]: 
I0127 07:45:54.500873 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500893 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500912 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500935 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500953 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500976 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501001 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501021 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501043 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501063 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501084 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501105 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501125 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501147 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501170 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501190 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501210 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501228 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501247 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501267 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501288 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501326 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501352 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501377 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501396 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501414 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501438 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501457 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501478 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501498 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500959 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502842 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.500978 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501023 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501375 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501497 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501526 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501715 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502063 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502097 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502327 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502443 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502487 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502519 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502655 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502771 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503129 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503135 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502909 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502934 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.502806 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.501542 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503860 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503893 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503920 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503948 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503972 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503965 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.503974 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504413 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504439 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504461 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504480 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504498 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504515 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504537 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504553 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 
27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504570 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504569 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504588 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504605 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504624 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504643 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504665 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504687 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504709 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504730 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504749 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504768 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504792 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504816 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504834 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504852 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504874 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504892 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504912 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504932 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504950 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504969 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504986 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505008 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505025 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505041 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505061 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505078 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505095 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505113 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505132 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505148 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505168 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505187 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505226 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505249 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505267 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505285 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505317 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505336 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505353 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505372 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505389 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505405 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505422 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505441 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505459 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505474 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505506 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505537 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505556 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505571 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505588 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505605 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 
27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505625 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505642 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505657 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505675 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505694 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505712 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505730 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505749 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505767 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505784 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505802 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505824 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505845 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505867 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505889 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505908 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505963 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505982 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.505999 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506017 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506034 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506053 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 07:45:54 crc 
kubenswrapper[4799]: I0127 07:45:54.506072 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506089 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506105 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506126 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506143 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506160 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506200 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506219 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506236 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506255 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506275 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506290 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506322 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.506341 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.507835 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.508147 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504620 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: 
"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.524892 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.524963 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504655 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.504125 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.508833 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.525714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.509118 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.509847 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.525828 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.510587 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.510948 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.511160 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.511602 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.514361 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.514981 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.515290 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.515344 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.515383 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.515749 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.515892 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.516009 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.516447 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.516591 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.516714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.516843 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.517044 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.517052 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.526172 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.517260 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.517494 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.517539 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.517869 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.517867 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.518742 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.518781 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.519059 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.519231 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.519356 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.519476 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.520326 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.520873 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.521671 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.521843 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.522070 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.522356 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.522436 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.522594 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.522627 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.523442 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.523577 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.523804 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.523986 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.524210 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.524448 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.524821 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.526154 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.526434 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.526560 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.526653 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.527500 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.527593 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.527923 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.527956 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.528137 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.524221 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529107 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529189 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529282 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529368 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529450 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529520 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529584 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529655 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529728 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529800 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529869 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529930 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530019 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530082 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530157 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530241 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") 
pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530351 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530418 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530487 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530612 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530683 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530748 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530811 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530876 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530942 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531006 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531072 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531154 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531222 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531292 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531399 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531464 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531535 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531601 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531668 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531735 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531797 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531863 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531928 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.531991 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.532060 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537237 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537272 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537293 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537336 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537358 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537380 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537400 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537421 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537508 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537549 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537804 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537817 4799 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537843 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537860 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc 
kubenswrapper[4799]: I0127 07:45:54.537871 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537883 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537895 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537905 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537915 4799 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537925 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537936 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537946 4799 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537958 4799 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537971 4799 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537984 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537996 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538011 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538023 4799 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538035 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538048 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538062 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538074 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538087 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538102 4799 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538114 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538127 4799 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538139 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538151 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538163 4799 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538173 4799 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538183 4799 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538206 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538215 4799 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc 
kubenswrapper[4799]: I0127 07:45:54.538225 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538234 4799 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538244 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538254 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538264 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538273 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538283 4799 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538294 4799 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538330 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538340 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538352 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538364 4799 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538377 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538389 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538402 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538412 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538421 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538432 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538441 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538451 4799 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538460 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538469 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" 
Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538478 4799 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538487 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538496 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538505 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538516 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538525 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538536 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 
07:45:54.538546 4799 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538555 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538565 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538575 4799 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538584 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538594 4799 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538604 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538616 
4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538628 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538637 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538646 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538656 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538665 4799 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538674 4799 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538682 4799 reconciler_common.go:293] "Volume detached for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538691 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538700 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538709 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538718 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538727 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538737 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538746 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538754 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538763 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538774 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538783 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538792 4799 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.539995 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.540634 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.528985 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.542776 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529170 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529219 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529530 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.529697 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.530349 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.543682 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.532339 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.533398 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.533487 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.533625 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.533871 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.534414 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.534589 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.534877 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.536617 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.536752 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537177 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537218 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.537822 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.538883 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.539115 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.540040 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.540189 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.540394 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.540587 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.540646 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.540919 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.541484 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.543126 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.543221 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.544810 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.545260 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.545879 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.548889 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.549286 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.549520 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.549559 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.549753 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.550175 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.550328 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.550472 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.550616 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.551190 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.551477 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.551478 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.551803 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.552090 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.552459 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.552816 4799 csr.go:261] certificate signing request csr-5bp4d is approved, waiting to be issued Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.553840 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.559470 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.560400 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.560581 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.563736 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.565505 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.566393 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.568274 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.569548 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.571480 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.572544 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.573442 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.576671 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.576858 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.576998 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.577479 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.579800 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.582113 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.582424 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.583124 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.589581 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.591003 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.591417 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.592355 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.592416 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.596283 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.596670 4799 csr.go:257] certificate signing request csr-5bp4d is issued Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.597509 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.602173 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.602332 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.603108 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.604314 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.604401 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.604650 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.605498 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.606087 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.607120 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.609782 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.610666 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.612043 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.604110 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.613231 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.613228 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.615426 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.615532 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.615768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.616078 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.619407 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.620554 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.620776 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.621083 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.622070 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.622213 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.630373 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.631054 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.638524 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71" exitCode=255 Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.638698 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71"} Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.640509 4799 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.640614 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.640696 4799 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.640780 4799 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.640858 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 
07:45:54.640938 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.641014 4799 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.642584 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.642684 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.642762 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.642835 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.642893 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.642968 4799 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643050 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643121 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643204 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643280 4799 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643358 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643634 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643716 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 
07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643792 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.643874 4799 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.645644 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.645735 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.645815 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.645904 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.645983 4799 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646055 4799 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646124 4799 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646200 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646343 4799 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646431 4799 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646505 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646587 4799 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646662 4799 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc 
kubenswrapper[4799]: I0127 07:45:54.646726 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646798 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646865 4799 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646930 4799 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.646998 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.647070 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.647141 4799 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.647233 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650189 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650352 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650390 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650434 4799 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650517 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650539 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650555 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.649855 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650568 4799 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 
27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650792 4799 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650896 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.650990 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.651102 4799 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.651201 4799 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.651317 4799 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.651425 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.651523 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.651876 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.651977 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652084 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652185 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652283 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652379 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652461 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") 
on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652545 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652688 4799 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652769 4799 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652852 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.652933 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653006 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653067 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 
07:45:54.653146 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653254 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653342 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653418 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653503 4799 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653583 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653661 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.653738 4799 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.655919 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656076 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656139 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656207 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656267 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656348 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656406 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656459 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656517 4799 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656576 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656630 4799 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656688 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656749 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656803 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 
27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656863 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656923 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.656982 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.657036 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.657096 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.657155 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.658475 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.661045 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"32d03600ead10d07c7254684f4c7a3534a99112047f91cbd1b0c48d3779bec1d"} Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.665633 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.670652 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.671081 4799 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.671741 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.688317 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.691919 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.706758 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b
595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.714786 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.727404 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: W0127 07:45:54.732417 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-1c985b2dfd7b49087ed601e80f7812552d95a704d9ae0dd0eb0201c6b2cefd82 WatchSource:0}: Error finding container 1c985b2dfd7b49087ed601e80f7812552d95a704d9ae0dd0eb0201c6b2cefd82: Status 404 returned error can't find the container with id 1c985b2dfd7b49087ed601e80f7812552d95a704d9ae0dd0eb0201c6b2cefd82 Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.741604 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.748176 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.751786 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.757998 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.760202 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.760231 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.760243 4799 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.760254 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.764281 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.772939 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.777142 4799 scope.go:117] "RemoveContainer" containerID="a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.778623 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.793841 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.808883 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.821729 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.834040 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.847021 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.864483 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.888668 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.920666 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.960658 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.960931 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.960954 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.960972 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:54 crc kubenswrapper[4799]: I0127 07:45:54.960988 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961092 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961107 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961118 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961143 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961206 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961157 4799 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:55.961143849 +0000 UTC m=+22.272247914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961242 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:55.961235792 +0000 UTC m=+22.272339857 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961252 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:55.961246752 +0000 UTC m=+22.272350817 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961380 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961401 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961416 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:54 crc kubenswrapper[4799]: E0127 07:45:54.961465 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:55.961445617 +0000 UTC m=+22.272549872 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.062038 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.062213 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:45:56.062184715 +0000 UTC m=+22.373288900 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.397720 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:20:56.766174804 +0000 UTC Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.450645 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.450808 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.486922 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gc4vh"] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.487552 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.490002 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.490891 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.497464 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.509031 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.525372 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.540040 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.556506 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.567617 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgp6\" (UniqueName: \"kubernetes.io/projected/7cf6cd90-b4bf-4e62-b758-d31590e43866-kube-api-access-lpgp6\") pod \"node-resolver-gc4vh\" (UID: 
\"7cf6cd90-b4bf-4e62-b758-d31590e43866\") " pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.567737 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7cf6cd90-b4bf-4e62-b758-d31590e43866-hosts-file\") pod \"node-resolver-gc4vh\" (UID: \"7cf6cd90-b4bf-4e62-b758-d31590e43866\") " pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.570268 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.595884 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.617022 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.632777 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 07:40:54 +0000 UTC, rotation deadline is 
2026-12-01 03:48:22.97190283 +0000 UTC Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.632821 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7388h2m27.339084583s for next certificate rotation Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.641767 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.663716 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d906e6f4a33056469a0f44181788fd58a2ce3fc90a980f54360885e4fef064cb"} Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.665280 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6"} Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.665333 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1c985b2dfd7b49087ed601e80f7812552d95a704d9ae0dd0eb0201c6b2cefd82"} Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 
07:45:55.667874 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.668083 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpgp6\" (UniqueName: \"kubernetes.io/projected/7cf6cd90-b4bf-4e62-b758-d31590e43866-kube-api-access-lpgp6\") pod \"node-resolver-gc4vh\" (UID: \"7cf6cd90-b4bf-4e62-b758-d31590e43866\") " pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.668107 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7cf6cd90-b4bf-4e62-b758-d31590e43866-hosts-file\") pod \"node-resolver-gc4vh\" (UID: \"7cf6cd90-b4bf-4e62-b758-d31590e43866\") " pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.668172 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7cf6cd90-b4bf-4e62-b758-d31590e43866-hosts-file\") pod \"node-resolver-gc4vh\" (UID: \"7cf6cd90-b4bf-4e62-b758-d31590e43866\") " pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.670213 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc"} Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.670744 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.671330 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.671990 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49"} Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.672125 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f"} Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.708331 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.709050 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpgp6\" (UniqueName: \"kubernetes.io/projected/7cf6cd90-b4bf-4e62-b758-d31590e43866-kube-api-access-lpgp6\") pod \"node-resolver-gc4vh\" (UID: \"7cf6cd90-b4bf-4e62-b758-d31590e43866\") " pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.728557 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.765478 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.784337 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.799240 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gc4vh" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.815387 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.833272 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.855192 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.872368 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.878568 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sqpcz"] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.879321 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.883155 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tgr7w"] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.884650 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.886328 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.886570 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.886600 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.887010 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.887651 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8fm6z"] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.888194 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.888967 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.889213 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.889280 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.889423 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.889834 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.889847 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 
07:45:55.891490 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.891540 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.891593 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.906398 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.929289 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.942329 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.959795 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.973115 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974406 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-cni-multus\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-system-cni-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974464 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974479 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974496 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/058f98c8-1b84-48d3-8167-ad1a5584351c-proxy-tls\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974511 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-daemon-config\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974529 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cnibin\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " 
pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974546 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnc6t\" (UniqueName: \"kubernetes.io/projected/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-kube-api-access-jnc6t\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974562 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-hostroot\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974575 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-os-release\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974599 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974619 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974638 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60934e21-bc53-4f80-bb08-bb67af7301cd-cni-binary-copy\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974656 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-conf-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974678 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpwdx\" (UniqueName: \"kubernetes.io/projected/60934e21-bc53-4f80-bb08-bb67af7301cd-kube-api-access-fpwdx\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974699 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.974768 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-os-release\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.974869 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.974892 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.974925 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:57.974911129 +0000 UTC m=+24.286015194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.974991 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:57.97494183 +0000 UTC m=+24.286046075 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975110 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975130 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975144 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975186 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:57.975176026 +0000 UTC m=+24.286280301 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975253 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/058f98c8-1b84-48d3-8167-ad1a5584351c-rootfs\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975376 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-k8s-cni-cncf-io\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975421 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-kubelet\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975447 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-netns\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " 
pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975480 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-multus-certs\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975504 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-socket-dir-parent\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975527 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-cni-bin\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975561 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-etc-kubernetes\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975580 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-system-cni-dir\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " 
pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975604 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cni-binary-copy\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975625 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/058f98c8-1b84-48d3-8167-ad1a5584351c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975658 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975770 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975796 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975807 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975836 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-cnibin\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975862 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-cni-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.975887 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrt7z\" (UniqueName: \"kubernetes.io/projected/058f98c8-1b84-48d3-8167-ad1a5584351c-kube-api-access-xrt7z\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:55 crc kubenswrapper[4799]: E0127 07:45:55.975922 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:45:57.975911716 +0000 UTC m=+24.287015781 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:55 crc kubenswrapper[4799]: I0127 07:45:55.986515 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.003141 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:55Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.021074 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.040524 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.058780 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076593 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076713 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-cni-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076747 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrt7z\" (UniqueName: \"kubernetes.io/projected/058f98c8-1b84-48d3-8167-ad1a5584351c-kube-api-access-xrt7z\") pod \"machine-config-daemon-sqpcz\" (UID: 
\"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076773 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-system-cni-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076796 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-cni-multus\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076818 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-daemon-config\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076821 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076837 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.076999 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077026 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/058f98c8-1b84-48d3-8167-ad1a5584351c-proxy-tls\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077047 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnc6t\" (UniqueName: \"kubernetes.io/projected/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-kube-api-access-jnc6t\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077071 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cnibin\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077101 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-hostroot\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-os-release\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077162 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60934e21-bc53-4f80-bb08-bb67af7301cd-cni-binary-copy\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077182 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-conf-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077202 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpwdx\" (UniqueName: \"kubernetes.io/projected/60934e21-bc53-4f80-bb08-bb67af7301cd-kube-api-access-fpwdx\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077230 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-os-release\") pod 
\"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077251 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/058f98c8-1b84-48d3-8167-ad1a5584351c-rootfs\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077277 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-k8s-cni-cncf-io\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077320 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-kubelet\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077345 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-socket-dir-parent\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077370 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-netns\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " 
pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077391 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-multus-certs\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077410 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cni-binary-copy\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077431 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/058f98c8-1b84-48d3-8167-ad1a5584351c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077453 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077464 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-cni-bin\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " 
pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077484 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-etc-kubernetes\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077508 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-system-cni-dir\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077530 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-cnibin\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077597 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-cnibin\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077636 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/058f98c8-1b84-48d3-8167-ad1a5584351c-rootfs\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077671 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-k8s-cni-cncf-io\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077704 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-kubelet\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077750 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-socket-dir-parent\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077786 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-netns\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077808 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-os-release\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-run-multus-certs\") pod \"multus-tgr7w\" (UID: 
\"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.077837 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-hostroot\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cni-binary-copy\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078538 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-os-release\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078575 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-cni-bin\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078579 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-etc-kubernetes\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 
07:45:56.078616 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-system-cni-dir\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078663 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-conf-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: E0127 07:45:56.078703 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:45:58.078673538 +0000 UTC m=+24.389777603 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078530 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/058f98c8-1b84-48d3-8167-ad1a5584351c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078916 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-system-cni-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078970 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-cni-dir\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.078992 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 
07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.079062 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-cnibin\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.079146 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/60934e21-bc53-4f80-bb08-bb67af7301cd-host-var-lib-cni-multus\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.079290 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60934e21-bc53-4f80-bb08-bb67af7301cd-cni-binary-copy\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.079633 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/60934e21-bc53-4f80-bb08-bb67af7301cd-multus-daemon-config\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.086272 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/058f98c8-1b84-48d3-8167-ad1a5584351c-proxy-tls\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.096685 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-xrt7z\" (UniqueName: \"kubernetes.io/projected/058f98c8-1b84-48d3-8167-ad1a5584351c-kube-api-access-xrt7z\") pod \"machine-config-daemon-sqpcz\" (UID: \"058f98c8-1b84-48d3-8167-ad1a5584351c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.103882 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpwdx\" (UniqueName: \"kubernetes.io/projected/60934e21-bc53-4f80-bb08-bb67af7301cd-kube-api-access-fpwdx\") pod \"multus-tgr7w\" (UID: \"60934e21-bc53-4f80-bb08-bb67af7301cd\") " pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.108737 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnc6t\" (UniqueName: \"kubernetes.io/projected/9e09525d-7c34-4bc8-883e-f6dafcd0b4f3-kube-api-access-jnc6t\") pod \"multus-additional-cni-plugins-8fm6z\" (UID: \"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\") " pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.119817 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.200388 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:45:56 crc kubenswrapper[4799]: W0127 07:45:56.214227 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod058f98c8_1b84_48d3_8167_ad1a5584351c.slice/crio-7fa5ec5e385bef74d236553715b7920cbb0734cf1ad0b70e8ba834287d1a4362 WatchSource:0}: Error finding container 7fa5ec5e385bef74d236553715b7920cbb0734cf1ad0b70e8ba834287d1a4362: Status 404 returned error can't find the container with id 7fa5ec5e385bef74d236553715b7920cbb0734cf1ad0b70e8ba834287d1a4362 Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.215503 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.216402 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tgr7w" Jan 27 07:45:56 crc kubenswrapper[4799]: W0127 07:45:56.243880 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e09525d_7c34_4bc8_883e_f6dafcd0b4f3.slice/crio-00369b0434683ae3ffda3700116c1f9104b0c833078afcd8ceea3621bc078f29 WatchSource:0}: Error finding container 00369b0434683ae3ffda3700116c1f9104b0c833078afcd8ceea3621bc078f29: Status 404 returned error can't find the container with id 00369b0434683ae3ffda3700116c1f9104b0c833078afcd8ceea3621bc078f29 Jan 27 07:45:56 crc kubenswrapper[4799]: W0127 07:45:56.249527 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60934e21_bc53_4f80_bb08_bb67af7301cd.slice/crio-bdebfd8be0c4a4faf2fc24437e74cf6be515416394ae13dcb54b80e64a160f0f WatchSource:0}: Error finding container bdebfd8be0c4a4faf2fc24437e74cf6be515416394ae13dcb54b80e64a160f0f: Status 404 returned error can't find the 
container with id bdebfd8be0c4a4faf2fc24437e74cf6be515416394ae13dcb54b80e64a160f0f Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.297163 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hggcd"] Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.297957 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.299878 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.300030 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.300233 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.302050 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.302291 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.302428 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.302529 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.324222 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.340072 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.349019 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.355700 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.363277 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.370615 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379770 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-config\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379818 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-systemd-units\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379843 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-systemd\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379865 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-var-lib-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379885 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379904 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-env-overrides\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379919 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-script-lib\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379935 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovn-node-metrics-cert\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 
07:45:56.379954 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-bin\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-netns\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.379994 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-etc-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.380012 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-log-socket\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.380027 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-kubelet\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: 
I0127 07:45:56.380042 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-slash\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.380064 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-node-log\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.380081 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-ovn-kubernetes\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.380107 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnc94\" (UniqueName: \"kubernetes.io/projected/836be94a-c1de-4b1c-b98a-7af78a2a4607-kube-api-access-nnc94\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.380129 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-ovn\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc 
kubenswrapper[4799]: I0127 07:45:56.380143 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-netd\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.380162 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.386164 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.398925 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:40:02.22011573 +0000 UTC Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.407441 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.431652 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.448291 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.448599 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.450756 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:45:56 crc kubenswrapper[4799]: E0127 07:45:56.450902 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.450997 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:45:56 crc kubenswrapper[4799]: E0127 07:45:56.451074 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.456552 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.457126 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.458494 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.459242 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.460345 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.460927 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.461569 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.462519 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.463219 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.464659 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.465691 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.466228 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.467481 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.467996 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.468566 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 
07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.469503 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.470034 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.470974 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.471499 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.472051 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.473069 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.473578 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.474521 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 
07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.474945 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.476498 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.477072 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.477752 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.478738 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.479266 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.479961 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480548 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 
07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480726 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-systemd-units\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480771 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-systemd\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480791 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-var-lib-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480817 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480840 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-env-overrides\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480858 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-script-lib\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480879 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovn-node-metrics-cert\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480915 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-bin\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480937 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-netns\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480947 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-systemd-units\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480985 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-var-lib-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-etc-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.480999 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-etc-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481084 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-openvswitch\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481135 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-log-socket\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481226 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-kubelet\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481188 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-log-socket\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481260 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-slash\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481323 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-node-log\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481325 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-systemd\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481358 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-ovn-kubernetes\") pod \"ovnkube-node-hggcd\" (UID: 
\"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481383 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-slash\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481414 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnc94\" (UniqueName: \"kubernetes.io/projected/836be94a-c1de-4b1c-b98a-7af78a2a4607-kube-api-access-nnc94\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481412 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-bin\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481432 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-kubelet\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481462 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-ovn\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc 
kubenswrapper[4799]: I0127 07:45:56.481501 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-node-log\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481513 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-netd\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481524 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-netns\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481572 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-netd\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481616 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-ovn-kubernetes\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481652 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481691 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-ovn\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.481927 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-config\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.482930 4799 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.483177 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.483330 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-env-overrides\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.484060 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-config\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.484713 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-script-lib\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.485807 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.486599 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.487162 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: 
I0127 07:45:56.490623 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovn-node-metrics-cert\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.495323 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.496363 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.503832 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.504435 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.505602 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.506646 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.507852 4799 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.508504 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.509983 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.510604 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.515617 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnc94\" (UniqueName: \"kubernetes.io/projected/836be94a-c1de-4b1c-b98a-7af78a2a4607-kube-api-access-nnc94\") pod \"ovnkube-node-hggcd\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.516390 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.517167 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.518632 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.520546 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.522158 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.523943 4799 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.525943 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.526534 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.529199 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.529849 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.530381 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.536174 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.553534 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.575895 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.589285 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.605459 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.622272 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.623220 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.664566 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91
a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.677050 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"b37a897f1d7c7fd61b602fd229b5ec7f496fbebd5c4bc0144407a024fe391418"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.678213 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgr7w" event={"ID":"60934e21-bc53-4f80-bb08-bb67af7301cd","Type":"ContainerStarted","Data":"10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.678242 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgr7w" event={"ID":"60934e21-bc53-4f80-bb08-bb67af7301cd","Type":"ContainerStarted","Data":"bdebfd8be0c4a4faf2fc24437e74cf6be515416394ae13dcb54b80e64a160f0f"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.681118 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerStarted","Data":"980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.681175 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerStarted","Data":"00369b0434683ae3ffda3700116c1f9104b0c833078afcd8ceea3621bc078f29"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.686350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.686442 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.686459 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"7fa5ec5e385bef74d236553715b7920cbb0734cf1ad0b70e8ba834287d1a4362"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.687966 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gc4vh" event={"ID":"7cf6cd90-b4bf-4e62-b758-d31590e43866","Type":"ContainerStarted","Data":"2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.688027 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gc4vh" event={"ID":"7cf6cd90-b4bf-4e62-b758-d31590e43866","Type":"ContainerStarted","Data":"d031cd6012f1b91425b25af672fc6bc8205528d2dca1120ccfcce62d3066bcda"} Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.691040 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.715072 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.732157 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.808492 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.827132 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.848753 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.891711 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.910044 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.923172 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.942896 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.955691 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.972540 4799 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:56 crc kubenswrapper[4799]: I0127 07:45:56.991281 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:56Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.007667 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.021663 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.045114 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.060464 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.077043 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.091231 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.106531 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.123681 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.135345 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.150669 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.401017 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:45:04.621786052 +0000 UTC Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.450677 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:57 crc kubenswrapper[4799]: E0127 07:45:57.450897 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.694344 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f"} Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.696391 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95" exitCode=0 Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.696469 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} Jan 27 07:45:57 crc 
kubenswrapper[4799]: I0127 07:45:57.699028 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e09525d-7c34-4bc8-883e-f6dafcd0b4f3" containerID="980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f" exitCode=0 Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.699116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerDied","Data":"980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f"} Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.712077 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.738046 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.750826 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.765146 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.779176 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.791546 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.810347 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.825028 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.832015 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-w5s6n"] Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.832600 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.835333 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.838449 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.838697 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.838816 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.857636 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.870070 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeae
e65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.889804 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.903334 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.918111 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.938166 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.953975 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.967465 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:57 crc kubenswrapper[4799]: I0127 07:45:57.982601 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:57Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.000606 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.000653 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmlwh\" (UniqueName: \"kubernetes.io/projected/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-kube-api-access-rmlwh\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.000674 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-host\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.000693 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.000712 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.000742 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.000762 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-serviceca\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.000834 4799 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.000865 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.000880 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.000944 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.000946 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:02.000928346 +0000 UTC m=+28.312032411 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.001002 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:02.000987478 +0000 UTC m=+28.312091553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.000849 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.001026 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:02.001019929 +0000 UTC m=+28.312123994 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.001146 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.001231 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.001251 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.001403 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:02.001354988 +0000 UTC m=+28.312459053 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.005450 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-mu
ltus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.022438 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.041887 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.071455 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.101617 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.101779 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmlwh\" 
(UniqueName: \"kubernetes.io/projected/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-kube-api-access-rmlwh\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.101898 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-host\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.101997 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-serviceca\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.102572 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-host\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.102572 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:46:02.102532788 +0000 UTC m=+28.413636863 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.103150 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-serviceca\") pod \"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.107860 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.140513 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmlwh\" (UniqueName: \"kubernetes.io/projected/fb74b5d4-3624-4b27-9621-2d38cc2c6f3d-kube-api-access-rmlwh\") pod 
\"node-ca-w5s6n\" (UID: \"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\") " pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.170750 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.209180 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\
\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.234989 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-w5s6n" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.249946 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.300801 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.340922 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.369881 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.401494 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:55:57.041449566 +0000 UTC Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.418057 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.450447 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.450482 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.450576 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:45:58 crc kubenswrapper[4799]: E0127 07:45:58.450688 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.704453 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w5s6n" event={"ID":"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d","Type":"ContainerStarted","Data":"190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.704502 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w5s6n" event={"ID":"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d","Type":"ContainerStarted","Data":"5835a46abd406e7a43e109c18ba69ba6bdf7bf4ca6e54f3be15c88feaf71a12e"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.706810 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e09525d-7c34-4bc8-883e-f6dafcd0b4f3" containerID="1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06" exitCode=0 Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.706877 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerDied","Data":"1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.719552 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.719652 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} Jan 27 07:45:58 crc 
kubenswrapper[4799]: I0127 07:45:58.719676 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.719696 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.719715 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.719735 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.724361 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.739758 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.758123 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.770188 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.793464 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.809409 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.829184 4799 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.845264 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.868614 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.885598 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.901867 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.922007 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.935424 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:58 crc kubenswrapper[4799]: I0127 07:45:58.966613 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:58Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.010755 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.067180 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.093279 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.140411 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.172214 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.211920 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.252260 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.294352 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.330156 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.374633 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.402169 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:49:27.17378903 +0000 UTC Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.409990 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.451022 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:45:59 crc kubenswrapper[4799]: E0127 07:45:59.451197 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.454959 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.490778 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.531131 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.568637 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.608321 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.724870 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e09525d-7c34-4bc8-883e-f6dafcd0b4f3" containerID="a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25" exitCode=0 Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.724915 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerDied","Data":"a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25"} Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.749275 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.764022 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.777924 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.797328 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.815719 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.848372 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.891903 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5b
b99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.933848 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8
a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:45:59 crc kubenswrapper[4799]: I0127 07:45:59.970416 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:45:59Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.016018 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.056735 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.089384 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.132646 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.175151 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.218146 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 
07:46:00.402957 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:01:01.814354476 +0000 UTC Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.451071 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.451088 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.451348 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.451632 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.694799 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.699356 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.699436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.699456 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.700553 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.713589 4799 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.714027 4799 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.715858 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.715903 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.715922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.715951 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.715972 4799 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:00Z","lastTransitionTime":"2026-01-27T07:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.735770 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e09525d-7c34-4bc8-883e-f6dafcd0b4f3" containerID="8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b" exitCode=0 Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.735844 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerDied","Data":"8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b"} Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.744083 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.753915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.754000 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.754027 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.754064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.754087 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:00Z","lastTransitionTime":"2026-01-27T07:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.765611 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.778491 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.788010 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.788102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.788185 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.788220 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.788241 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:00Z","lastTransitionTime":"2026-01-27T07:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.792587 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.808133 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.814397 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.825408 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.825568 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.825600 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.825643 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.825673 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:00Z","lastTransitionTime":"2026-01-27T07:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.830117 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.844330 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.849009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.849042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.849054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.849074 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.849089 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:00Z","lastTransitionTime":"2026-01-27T07:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.856075 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.863636 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: E0127 07:46:00.863748 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.866631 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.866664 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.866677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.866693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.866705 4799 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:00Z","lastTransitionTime":"2026-01-27T07:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.874146 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.898663 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.914739 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.932520 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.947559 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.963357 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.969678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:00 crc 
kubenswrapper[4799]: I0127 07:46:00.969713 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.969724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.969741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.969751 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:00Z","lastTransitionTime":"2026-01-27T07:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.982075 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:00 crc kubenswrapper[4799]: I0127 07:46:00.998777 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:00Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.013162 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.023164 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.073759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.073809 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.073824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.073845 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.073859 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.176531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.176572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.176581 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.176598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.176609 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.279102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.279154 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.279164 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.279181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.279193 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.382160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.382230 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.382240 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.382254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.382262 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.403659 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:01:48.072788121 +0000 UTC Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.451351 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:01 crc kubenswrapper[4799]: E0127 07:46:01.451493 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.484225 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.484268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.484278 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.484293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.484327 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.587919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.587968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.587983 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.588006 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.588022 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.691251 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.691324 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.691338 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.691357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.691370 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.743151 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e09525d-7c34-4bc8-883e-f6dafcd0b4f3" containerID="b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870" exitCode=0 Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.743209 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerDied","Data":"b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.750350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.770502 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.793665 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.794477 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.794552 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.794571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.794599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.794619 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.812196 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.828032 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.852188 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.867873 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.899183 4799 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.911843 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.911910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.911930 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.911965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.911987 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:01Z","lastTransitionTime":"2026-01-27T07:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.922993 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z 
is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.940422 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.957945 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.971457 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:01 crc kubenswrapper[4799]: I0127 07:46:01.986574 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:01Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.008238 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.015421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.015467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.015478 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 
07:46:02.015499 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.015511 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.025082 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.043244 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.045082 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.045172 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:02 crc 
kubenswrapper[4799]: I0127 07:46:02.045223 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.045277 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045362 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045414 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045438 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045484 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:02 crc 
kubenswrapper[4799]: E0127 07:46:02.045502 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045519 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:10.04549499 +0000 UTC m=+36.356599265 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045506 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045587 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045607 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045589 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:10.045562422 +0000 UTC m=+36.356666517 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045664 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:10.045650044 +0000 UTC m=+36.356754349 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.045693 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:10.045681555 +0000 UTC m=+36.356785890 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.118949 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.119010 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.119024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.119047 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.119063 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.146512 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.146751 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:46:10.14671011 +0000 UTC m=+36.457814185 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.221897 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.221986 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.222010 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.222044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: 
I0127 07:46:02.222064 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.326035 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.326118 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.326137 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.326170 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.326189 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.404810 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:14:13.272999065 +0000 UTC Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.429560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.429635 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.429655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.429867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.429890 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.450932 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.451127 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.451361 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:02 crc kubenswrapper[4799]: E0127 07:46:02.451649 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.533016 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.533107 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.533127 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.533174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.533198 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.636824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.636894 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.636908 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.636935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.636951 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.741197 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.741255 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.741266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.741284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.741295 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.763470 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e09525d-7c34-4bc8-883e-f6dafcd0b4f3" containerID="67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca" exitCode=0 Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.763528 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerDied","Data":"67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.792851 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.814209 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.829379 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.843649 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T0
7:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.844861 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.844926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.844939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.844963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.844977 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.859878 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.879368 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.895231 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.909257 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 
2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.967008 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.968956 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.968990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.969001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.969018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.969035 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:02Z","lastTransitionTime":"2026-01-27T07:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:02 crc kubenswrapper[4799]: I0127 07:46:02.987225 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.002940 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.017252 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.032462 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.049548 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.063953 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.073360 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.073406 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.073420 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.073443 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.073458 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.176842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.176883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.176892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.176908 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.176917 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.279967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.280007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.280017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.280037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.280049 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.382091 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.382136 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.382146 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.382164 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.382175 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.405418 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:47:39.055857615 +0000 UTC Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.450960 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:03 crc kubenswrapper[4799]: E0127 07:46:03.451081 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.485391 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.485449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.485464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.485485 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.485497 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.588547 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.588585 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.588593 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.588610 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.588621 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.691609 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.691651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.691660 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.691678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.691688 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.772707 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.773463 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.782725 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" event={"ID":"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3","Type":"ContainerStarted","Data":"641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.788916 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.794529 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.794569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.794578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.794598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.794614 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.796403 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.802213 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.819668 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e
1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.849209 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.867957 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.888678 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.897920 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:03 crc 
kubenswrapper[4799]: I0127 07:46:03.897961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.897973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.897991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.898002 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:03Z","lastTransitionTime":"2026-01-27T07:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.919416 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e
22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.935273 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.950596 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.968652 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.980633 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:03 crc kubenswrapper[4799]: I0127 07:46:03.995072 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:03Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.000080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.000122 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.000137 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.000162 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.000178 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.008525 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.022667 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.038527 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.050591 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.063646 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.077528 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.095838 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.103441 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.103500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.103513 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.103537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.103553 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.110955 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.142102 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.159235 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.161611 4799 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.206275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.206334 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.206345 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.206368 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.206380 
4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.309370 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.309419 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.309428 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.309447 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.309456 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.406555 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:38:02.07380394 +0000 UTC Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.412910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.412952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.412963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.412982 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.412998 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.451445 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:04 crc kubenswrapper[4799]: E0127 07:46:04.451581 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.452004 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:04 crc kubenswrapper[4799]: E0127 07:46:04.452082 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.516057 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.516098 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.516106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.516365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.516379 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.619854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.619898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.619909 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.619926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.619935 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.722899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.722956 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.722968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.722988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.723001 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.790174 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.813801 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.825776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.825829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.825843 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.825862 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.825876 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.928491 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.928538 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.928550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.928573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:04 crc kubenswrapper[4799]: I0127 07:46:04.928586 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:04Z","lastTransitionTime":"2026-01-27T07:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.031133 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.031194 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.031210 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.031232 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.031249 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.133594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.133630 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.133640 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.133654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.133666 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.236114 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.236152 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.236176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.236197 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.236209 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.248614 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.264422 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.279395 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.292977 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.307694 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.322491 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.337787 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.339074 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.339129 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.339145 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.339166 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.339181 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.354586 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.369437 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.393785 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07
:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.407378 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:21:59.276865634 +0000 UTC Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.410317 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.430558 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.442078 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.442128 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.442140 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.442162 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.442176 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.449241 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0
d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.450330 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:05 crc kubenswrapper[4799]: E0127 07:46:05.450443 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.463736 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.480499 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.494054 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.507270 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.519006 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.528682 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.536016 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.544385 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.544421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.544435 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.544452 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.544464 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.548032 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.557958 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.566266 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.575363 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.589102 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.599464 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.608987 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.619943 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.637634 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.646413 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.646463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.646474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.646491 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.646502 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.655404 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.674936 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.687514 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.700586 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.717395 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.728751 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.741191 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.749342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.749379 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.749390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.749407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.749423 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.754580 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.765196 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.772862 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:05Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.851466 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.851499 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.851507 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.851520 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.851529 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.953579 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.953612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.953624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.953639 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:05 crc kubenswrapper[4799]: I0127 07:46:05.953650 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:05Z","lastTransitionTime":"2026-01-27T07:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.055835 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.055871 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.055879 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.055895 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.055904 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.158185 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.158217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.158227 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.158244 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.158253 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.260464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.260503 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.260513 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.260528 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.260538 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.362281 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.362346 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.362359 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.362379 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.362392 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.407944 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:27:22.414069611 +0000 UTC Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.451326 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:06 crc kubenswrapper[4799]: E0127 07:46:06.451452 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.451336 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:06 crc kubenswrapper[4799]: E0127 07:46:06.451629 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.464736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.464774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.464784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.464795 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.464808 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.567467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.567530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.567547 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.567573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.567595 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.670055 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.670125 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.670148 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.670181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.670204 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.772719 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.772760 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.772774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.772791 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.772803 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.794316 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/0.log" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.796096 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a" exitCode=1 Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.796129 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.796871 4799 scope.go:117] "RemoveContainer" containerID="7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.815807 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.828710 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.844947 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.856617 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.869970 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.875228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.875262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.875272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc 
kubenswrapper[4799]: I0127 07:46:06.875287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.875314 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.889962 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.904224 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.927421 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:06Z\\\",\\\"message\\\":\\\"andler 1 for removal\\\\nI0127 07:46:06.203951 6110 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 07:46:06.203974 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 
07:46:06.203983 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 07:46:06.203985 6110 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 07:46:06.204021 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 07:46:06.204041 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 07:46:06.204095 6110 factory.go:656] Stopping watch factory\\\\nI0127 07:46:06.204093 6110 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 07:46:06.204102 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 07:46:06.204126 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:46:06.204127 6110 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:46:06.204136 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 07:46:06.204146 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 07:46:06.204239 6110 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d98249373543
61c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.957586 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.973867 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.977940 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.977978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.977987 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.978002 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.978012 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:06Z","lastTransitionTime":"2026-01-27T07:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.986002 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:06 crc kubenswrapper[4799]: I0127 07:46:06.998021 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:06Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.014487 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.028349 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.040799 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.080607 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.080642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.080651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.080667 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.080677 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.182475 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.182524 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.182536 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.182552 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.182564 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.285013 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.285060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.285070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.285124 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.285137 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.388144 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.388187 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.388198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.388216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.388226 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.409059 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:04:52.053404393 +0000 UTC Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.451200 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:07 crc kubenswrapper[4799]: E0127 07:46:07.451367 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.490280 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.490345 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.490361 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.490380 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.490393 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.593023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.593117 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.593131 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.593151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.593165 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.695625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.695673 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.695684 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.695705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.695716 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.798165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.798201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.798209 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.798223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.798233 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.802065 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/0.log" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.805252 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.805711 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.820065 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.831834 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.843731 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.856001 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5b
b99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.877962 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8
a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.900105 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.901333 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.901403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.901418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.901437 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.901451 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:07Z","lastTransitionTime":"2026-01-27T07:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.924428 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:06Z\\\",\\\"message\\\":\\\"andler 1 for removal\\\\nI0127 07:46:06.203951 6110 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 07:46:06.203974 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 07:46:06.203983 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 07:46:06.203985 6110 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 07:46:06.204021 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 07:46:06.204041 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 07:46:06.204095 6110 factory.go:656] Stopping watch factory\\\\nI0127 07:46:06.204093 6110 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 07:46:06.204102 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 07:46:06.204126 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:46:06.204127 6110 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:46:06.204136 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 07:46:06.204146 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 07:46:06.204239 6110 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.938191 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.952389 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.964760 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.978139 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:07 crc kubenswrapper[4799]: I0127 07:46:07.994054 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:07Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.004135 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.004165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.004176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.004193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.004202 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.012239 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.025641 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.048899 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.106624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.106651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.106660 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.106672 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.106681 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.208864 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.208942 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.208963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.209000 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.209020 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.259283 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd"] Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.260096 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.262069 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.262386 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.276444 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.293562 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.309218 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.311340 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.311389 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.311411 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.311436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.311456 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.321655 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.332990 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7997b05f-6093-45cc-aa37-f988051c7f32-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.333136 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvtsq\" (UniqueName: \"kubernetes.io/projected/7997b05f-6093-45cc-aa37-f988051c7f32-kube-api-access-fvtsq\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.333239 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7997b05f-6093-45cc-aa37-f988051c7f32-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.333391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7997b05f-6093-45cc-aa37-f988051c7f32-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: 
\"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.335703 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.349372 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name
\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.361869 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.373278 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.394784 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.410112 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:09:04.009410714 +0000 UTC Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.410883 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.413421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.413460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.413472 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.413487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.413518 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.434087 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7997b05f-6093-45cc-aa37-f988051c7f32-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.434126 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvtsq\" (UniqueName: \"kubernetes.io/projected/7997b05f-6093-45cc-aa37-f988051c7f32-kube-api-access-fvtsq\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.434159 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7997b05f-6093-45cc-aa37-f988051c7f32-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.434192 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7997b05f-6093-45cc-aa37-f988051c7f32-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.434942 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7997b05f-6093-45cc-aa37-f988051c7f32-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.435153 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7997b05f-6093-45cc-aa37-f988051c7f32-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.438974 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:06Z\\\",\\\"message\\\":\\\"andler 1 for removal\\\\nI0127 07:46:06.203951 6110 handler.go:190] Sending *v1.Namespace 
event handler 5 for removal\\\\nI0127 07:46:06.203974 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 07:46:06.203983 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 07:46:06.203985 6110 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 07:46:06.204021 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 07:46:06.204041 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 07:46:06.204095 6110 factory.go:656] Stopping watch factory\\\\nI0127 07:46:06.204093 6110 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 07:46:06.204102 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 07:46:06.204126 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:46:06.204127 6110 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:46:06.204136 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 07:46:06.204146 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 07:46:06.204239 6110 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.440940 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7997b05f-6093-45cc-aa37-f988051c7f32-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.451441 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.451514 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:08 crc kubenswrapper[4799]: E0127 07:46:08.451554 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:08 crc kubenswrapper[4799]: E0127 07:46:08.451676 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.452583 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvtsq\" (UniqueName: \"kubernetes.io/projected/7997b05f-6093-45cc-aa37-f988051c7f32-kube-api-access-fvtsq\") pod \"ovnkube-control-plane-749d76644c-4vzbd\" (UID: \"7997b05f-6093-45cc-aa37-f988051c7f32\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.461628 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.472935 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.484866 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.496661 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.508980 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.516071 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.516131 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.516153 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.516181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.516206 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.585092 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.656827 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.656994 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.657025 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.657110 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.657138 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.759378 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.759678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.759784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.759887 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.759974 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.809092 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/1.log" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.809824 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/0.log" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.812348 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1" exitCode=1 Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.812417 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.812475 4799 scope.go:117] "RemoveContainer" containerID="7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.813310 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" event={"ID":"7997b05f-6093-45cc-aa37-f988051c7f32","Type":"ContainerStarted","Data":"059ce0ea21187369506c836bcb1cb32f3f50c5bd810901593496cfe7903580cc"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.813473 4799 scope.go:117] "RemoveContainer" containerID="d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1" Jan 27 07:46:08 crc kubenswrapper[4799]: E0127 07:46:08.813860 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.826143 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.840220 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.856874 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.863257 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.863372 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.863541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.863742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.863830 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.869845 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0
d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.883202 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.895698 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.905628 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.916372 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.927873 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.940640 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.952525 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.966174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.966214 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.966229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:08 crc 
kubenswrapper[4799]: I0127 07:46:08.966246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.966259 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:08Z","lastTransitionTime":"2026-01-27T07:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.969687 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:08 crc kubenswrapper[4799]: I0127 07:46:08.980564 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.001901 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a234725f7dfad3b2b5f78cdf1622af61db7159b587a4eef2709cba84802c96a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:06Z\\\",\\\"message\\\":\\\"andler 1 for removal\\\\nI0127 07:46:06.203951 6110 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 07:46:06.203974 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 07:46:06.203983 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 07:46:06.203985 6110 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 07:46:06.204021 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 07:46:06.204041 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 07:46:06.204095 6110 factory.go:656] Stopping watch factory\\\\nI0127 07:46:06.204093 6110 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 07:46:06.204102 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 07:46:06.204126 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 07:46:06.204127 6110 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 07:46:06.204136 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 07:46:06.204146 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 07:46:06.204239 6110 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added 
port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},
{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:08Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.026904 4799 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28
e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.038454 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.068682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.068728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.068740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.068758 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.068771 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.170847 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.170890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.170902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.170919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.170932 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.272971 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.273000 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.273008 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.273021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.273030 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.375279 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.375335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.375346 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.375363 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.375373 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.410952 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:25:11.335490829 +0000 UTC Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.450439 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:09 crc kubenswrapper[4799]: E0127 07:46:09.450579 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.478276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.478329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.478339 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.478354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.478365 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.581429 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.581513 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.581531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.581555 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.581572 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.684138 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.684213 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.684238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.684268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.684288 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.787034 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.787349 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.787415 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.787481 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.787550 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.819786 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/1.log" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.827269 4799 scope.go:117] "RemoveContainer" containerID="d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1" Jan 27 07:46:09 crc kubenswrapper[4799]: E0127 07:46:09.827549 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.834033 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" event={"ID":"7997b05f-6093-45cc-aa37-f988051c7f32","Type":"ContainerStarted","Data":"3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.834106 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" event={"ID":"7997b05f-6093-45cc-aa37-f988051c7f32","Type":"ContainerStarted","Data":"30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.851981 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.864977 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.876079 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5b
b99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.886957 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-e
tc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.889707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.889759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.889776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.889800 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.889817 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.901818 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.919944 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.939013 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.950153 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.961453 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.973751 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.986501 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.992402 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.992446 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.992460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.992481 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.992496 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:09Z","lastTransitionTime":"2026-01-27T07:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:09 crc kubenswrapper[4799]: I0127 07:46:09.997883 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0
d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:09Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.009994 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.021606 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.033693 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.047597 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.060057 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.067058 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.067101 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.067125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.067158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067215 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067281 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:26.067261412 +0000 UTC m=+52.378365497 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067288 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067328 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067346 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067386 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067399 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:26.067381075 +0000 UTC m=+52.378485340 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067288 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067518 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067553 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067433 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:26.067419226 +0000 UTC m=+52.378523491 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.067658 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:26.067642442 +0000 UTC m=+52.378746517 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.079148 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.091406 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.095055 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.095095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.095104 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.095118 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.095127 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.104925 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.122063 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.136877 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.155458 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.167572 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.167691 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qq7cx"] Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.167842 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:46:26.167801614 +0000 UTC m=+52.478905739 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.168636 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.168770 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.169575 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.180404 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.193601 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.197037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.197171 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.197271 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.197432 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.197539 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.206238 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.220181 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.235735 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.257442 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.268842 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4hh\" (UniqueName: \"kubernetes.io/projected/0af5040b-0391-423c-b87d-90df4965f58f-kube-api-access-8b4hh\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.269044 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.277507 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.299436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.299501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.299522 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.299548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.299570 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.301080 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.326391 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.340425 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.369663 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.369757 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b4hh\" (UniqueName: \"kubernetes.io/projected/0af5040b-0391-423c-b87d-90df4965f58f-kube-api-access-8b4hh\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.369947 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.370086 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:46:10.87004599 +0000 UTC m=+37.181150085 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.372279 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.392394 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.402061 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.402127 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.402141 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.402157 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.402170 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.403035 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b4hh\" (UniqueName: \"kubernetes.io/projected/0af5040b-0391-423c-b87d-90df4965f58f-kube-api-access-8b4hh\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.411568 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:39:35.374709974 +0000 UTC Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.416364 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.436553 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.451026 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.451217 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.451278 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.451373 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.453976 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.468808 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.481247 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc 
kubenswrapper[4799]: I0127 07:46:10.495853 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.505905 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.505950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.505964 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.505983 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.505994 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.508130 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.517635 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.528367 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.544983 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.555624 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.568046 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.579912 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.608853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.608927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.608952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.608985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.609009 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.714962 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.715005 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.715017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.715039 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.715167 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.818388 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.818424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.818436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.818453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.818465 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.872931 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.873713 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.873952 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:46:11.873913323 +0000 UTC m=+38.185017448 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.908659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.908745 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.908765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.908793 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.908811 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.926736 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.931624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.931675 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.931693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.931720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.931741 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.947895 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.952815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.952881 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.952900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.952924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.952940 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.972121 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.976400 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.976437 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.976450 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.976470 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.976480 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:10 crc kubenswrapper[4799]: E0127 07:46:10.993694 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:10Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.997268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.997321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.997351 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.997368 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:10 crc kubenswrapper[4799]: I0127 07:46:10.997380 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:10Z","lastTransitionTime":"2026-01-27T07:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: E0127 07:46:11.009821 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:11Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:11 crc kubenswrapper[4799]: E0127 07:46:11.009968 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.011524 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.011567 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.011581 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.011601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.011615 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.114033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.114086 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.114097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.114114 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.114128 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.216754 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.216818 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.216837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.216861 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.216879 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.319787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.319842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.319864 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.319898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.319923 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.413187 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:22:39.773008964 +0000 UTC Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.422075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.422162 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.422181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.422202 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.422217 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.450637 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.450637 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:11 crc kubenswrapper[4799]: E0127 07:46:11.450778 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:11 crc kubenswrapper[4799]: E0127 07:46:11.450879 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.525136 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.525174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.525183 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.525199 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.525208 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.628025 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.628076 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.628091 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.628109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.628123 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.730673 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.730709 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.730719 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.730734 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.730744 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.833324 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.833374 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.833388 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.833409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.833424 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.882877 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:11 crc kubenswrapper[4799]: E0127 07:46:11.883019 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:11 crc kubenswrapper[4799]: E0127 07:46:11.883111 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:46:13.883090739 +0000 UTC m=+40.194194814 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.941551 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.941584 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.941592 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.941606 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:11 crc kubenswrapper[4799]: I0127 07:46:11.941615 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:11Z","lastTransitionTime":"2026-01-27T07:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.044436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.044503 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.044521 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.044556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.044575 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.147684 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.147727 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.147736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.147752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.147767 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.249641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.249701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.249720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.249746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.249762 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.352139 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.352210 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.352228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.352253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.352272 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.413994 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:26:16.015205635 +0000 UTC Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.451494 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.452048 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:12 crc kubenswrapper[4799]: E0127 07:46:12.452117 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:12 crc kubenswrapper[4799]: E0127 07:46:12.452212 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.455130 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.455187 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.455200 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.455221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.455330 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.557723 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.557794 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.557812 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.557837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.557855 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.660207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.660269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.660287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.660333 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.660349 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.763341 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.763380 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.763390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.763409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.763420 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.865424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.865501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.865518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.865545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.865563 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.967561 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.967610 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.967625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.967646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:12 crc kubenswrapper[4799]: I0127 07:46:12.967662 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:12Z","lastTransitionTime":"2026-01-27T07:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.070900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.070981 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.071006 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.071038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.071060 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.173738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.173796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.173846 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.173864 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.173875 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.276398 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.276447 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.276460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.276479 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.276490 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.379211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.379262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.379285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.379334 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.379350 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.415206 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:57:58.191610115 +0000 UTC Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.450824 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.450824 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:13 crc kubenswrapper[4799]: E0127 07:46:13.450994 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:13 crc kubenswrapper[4799]: E0127 07:46:13.451093 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.482003 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.482082 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.482107 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.482226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.482259 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.585138 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.585188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.585200 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.585218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.585229 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.687934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.687989 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.688000 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.688018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.688028 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.790790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.790850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.790867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.790892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.790909 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.894212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.894265 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.894276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.894312 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.894326 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.899002 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:13 crc kubenswrapper[4799]: E0127 07:46:13.899210 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:13 crc kubenswrapper[4799]: E0127 07:46:13.899334 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:46:17.899280571 +0000 UTC m=+44.210384826 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.965236 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.983707 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.998581 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.998703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:13 crc 
kubenswrapper[4799]: I0127 07:46:13.998731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.998762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:13 crc kubenswrapper[4799]: I0127 07:46:13.998783 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:13Z","lastTransitionTime":"2026-01-27T07:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.000819 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.014892 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.032245 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.051900 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.065883 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa35
5d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.081344 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.091364 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.100858 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.100902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.100914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.100932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.100944 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.104091 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.118817 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc 
kubenswrapper[4799]: I0127 07:46:14.134558 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.152327 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.162312 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb
276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.172075 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.188855 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.200218 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.203574 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.203618 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.203627 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.203644 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.203654 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.217193 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.306142 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.306177 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.306188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.306207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.306218 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.409243 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.409314 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.409329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.409348 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.409359 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.416409 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:10:20.0206242 +0000 UTC Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.451850 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.451911 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:14 crc kubenswrapper[4799]: E0127 07:46:14.452009 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:14 crc kubenswrapper[4799]: E0127 07:46:14.452127 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.465288 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.476193 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.492389 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.504142 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.512152 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.512192 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.512202 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.512218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.512228 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.517715 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.527407 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.537729 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.553932 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc 
kubenswrapper[4799]: I0127 07:46:14.573007 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4
762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.585375 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.596598 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.608065 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.616165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.616206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.616219 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.616238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.616251 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.618213 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.630630 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.659428 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.673216 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.691287 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.718354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.718386 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.718395 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.718409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.718418 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.820197 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.820229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.820238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.820251 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.820260 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.922778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.922837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.922846 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.922861 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:14 crc kubenswrapper[4799]: I0127 07:46:14.922873 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:14Z","lastTransitionTime":"2026-01-27T07:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.025929 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.025983 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.025997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.026024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.026040 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.128699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.128758 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.128776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.128802 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.128820 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.232076 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.232166 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.232208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.232241 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.232264 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.335058 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.335101 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.335110 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.335124 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.335134 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.416599 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:14:42.580973599 +0000 UTC Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.437830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.437874 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.437889 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.437910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.437924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.451132 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:15 crc kubenswrapper[4799]: E0127 07:46:15.451363 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.451141 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:15 crc kubenswrapper[4799]: E0127 07:46:15.451641 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.541014 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.541050 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.541062 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.541080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.541092 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.643205 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.643248 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.643258 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.643285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.643295 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.745504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.745549 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.745562 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.745580 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.745591 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.847689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.847720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.847729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.847743 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.847752 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.950042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.950077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.950086 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.950101 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:15 crc kubenswrapper[4799]: I0127 07:46:15.950113 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:15Z","lastTransitionTime":"2026-01-27T07:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.053150 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.053191 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.053200 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.053218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.053227 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.155876 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.155928 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.155943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.155961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.155972 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.258515 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.258563 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.258572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.258587 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.258640 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.361422 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.361462 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.361472 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.361489 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.361499 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.417210 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:05:52.796385573 +0000 UTC Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.450898 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.450912 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:16 crc kubenswrapper[4799]: E0127 07:46:16.451121 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:16 crc kubenswrapper[4799]: E0127 07:46:16.451167 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.464231 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.464279 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.464294 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.464335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.464349 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.566958 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.567004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.567013 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.567034 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.567046 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.669139 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.669178 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.669188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.669203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.669213 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.771409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.771457 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.771467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.771482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.771491 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.873226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.873267 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.873276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.873292 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.873330 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.975750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.975786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.975796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.975810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:16 crc kubenswrapper[4799]: I0127 07:46:16.975821 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:16Z","lastTransitionTime":"2026-01-27T07:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.078853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.079185 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.079206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.079226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.079246 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.182897 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.182940 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.182950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.182967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.182980 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.286017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.286448 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.286574 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.286667 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.286754 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.390047 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.390100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.390112 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.390132 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.390145 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.417616 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:10:35.760437285 +0000 UTC Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.450543 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.450578 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:17 crc kubenswrapper[4799]: E0127 07:46:17.450698 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:17 crc kubenswrapper[4799]: E0127 07:46:17.450805 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.492831 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.492878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.492889 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.492906 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.492915 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.595882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.595990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.596004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.596022 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.596036 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.699244 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.699473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.699529 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.699553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.699794 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.841235 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.841340 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.841367 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.841399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.841421 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.943454 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:17 crc kubenswrapper[4799]: E0127 07:46:17.943589 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:17 crc kubenswrapper[4799]: E0127 07:46:17.943634 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:46:25.943620347 +0000 UTC m=+52.254724412 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.945245 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.945265 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.945273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.945330 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:17 crc kubenswrapper[4799]: I0127 07:46:17.945339 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:17Z","lastTransitionTime":"2026-01-27T07:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.047997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.048042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.048118 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.048175 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.048189 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.150535 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.150585 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.150596 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.150612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.150656 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.252976 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.253017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.253028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.253043 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.253054 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.355184 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.355241 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.355256 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.355277 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.355292 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.417748 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:46:51.784761277 +0000 UTC Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.450609 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.450633 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:18 crc kubenswrapper[4799]: E0127 07:46:18.450721 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:18 crc kubenswrapper[4799]: E0127 07:46:18.450853 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.457387 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.457436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.457446 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.457461 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.457472 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.560071 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.560121 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.560134 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.560150 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.560165 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.662882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.662963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.662973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.662987 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.662996 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.765359 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.765393 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.765401 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.765413 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.765422 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.867246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.867284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.867294 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.867326 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.867336 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.970731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.970813 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.970836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.970867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:18 crc kubenswrapper[4799]: I0127 07:46:18.970893 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:18Z","lastTransitionTime":"2026-01-27T07:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.073497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.073545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.073555 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.073573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.073583 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.176233 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.176276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.176284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.176315 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.176325 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.279283 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.279339 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.279349 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.279366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.279377 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.382244 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.382325 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.382336 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.382353 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.382362 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.418632 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:41:15.776288707 +0000 UTC Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.451238 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.451245 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:19 crc kubenswrapper[4799]: E0127 07:46:19.451406 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:19 crc kubenswrapper[4799]: E0127 07:46:19.451478 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.485570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.485629 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.485646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.485664 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.485675 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.588532 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.588612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.588623 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.588677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.588690 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.691004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.691040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.691052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.691067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.691079 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.793455 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.793529 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.793555 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.793589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.793615 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.896781 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.896874 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.896899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.896935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:19 crc kubenswrapper[4799]: I0127 07:46:19.896959 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:19Z","lastTransitionTime":"2026-01-27T07:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.000028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.000106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.000125 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.000149 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.000166 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.103232 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.103266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.103275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.103289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.103312 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.205511 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.205570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.205584 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.205608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.205621 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.308084 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.308338 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.308443 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.308513 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.308570 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.410958 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.411021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.411042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.411068 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.411085 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.419330 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:14:01.268150291 +0000 UTC Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.450992 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.451094 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:20 crc kubenswrapper[4799]: E0127 07:46:20.451156 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:20 crc kubenswrapper[4799]: E0127 07:46:20.451214 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.513364 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.513412 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.513421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.513437 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.513448 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.615750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.615808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.615823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.615841 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.615854 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.718667 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.718711 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.718724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.718741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.718752 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.821225 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.821276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.821292 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.821349 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.821367 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.925141 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.925411 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.925523 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.925612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:20 crc kubenswrapper[4799]: I0127 07:46:20.925702 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:20Z","lastTransitionTime":"2026-01-27T07:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.028571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.028612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.028623 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.028641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.028653 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.075674 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.088556 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.103968 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f
8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.122446 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.131164 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.131198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.131207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.131224 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.131234 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.174980 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.214105 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c
0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.230608 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.233448 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.233507 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.233523 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.233594 
4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.233613 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.243861 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.257285 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.270002 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.281000 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.291880 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.302375 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc 
kubenswrapper[4799]: I0127 07:46:21.315433 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4
762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.326910 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.336080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.336116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.336126 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 
07:46:21.336147 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.336158 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.342664 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.355486 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.366558 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.377832 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5b
b99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.399190 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.399221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.399232 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.399249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.399262 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.413738 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.417066 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.417100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.417108 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.417123 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.417134 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.420293 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:14:10.975417486 +0000 UTC Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.428697 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",
\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.436598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.436638 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.436645 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.436661 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.436671 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.448121 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.451159 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.451226 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.451280 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.451359 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.451829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.452019 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.452190 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.452337 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.452493 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.453122 4799 scope.go:117] "RemoveContainer" containerID="d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1" Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.466063 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b64
89f48d4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.473746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.473783 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.473792 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.473807 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.473818 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.486387 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: E0127 07:46:21.486507 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.489343 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.489563 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.489573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.489614 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.489626 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.591946 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.591981 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.591990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.592011 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.592021 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.694542 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.694615 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.694636 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.694664 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.694680 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.797947 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.798001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.798014 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.798033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.798044 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.868894 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/1.log" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.871262 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.884006 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.895239 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646
d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.900663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.900693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.900703 4799 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.900757 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.900769 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:21Z","lastTransitionTime":"2026-01-27T07:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.905603 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 
27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.918416 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.929603 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.941043 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.954611 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.966936 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:21 crc kubenswrapper[4799]: I0127 07:46:21.990164 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:21Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.002654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.002696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.002705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.002718 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.002727 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.007180 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.023814 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.035028 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.054382 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.069024 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c
0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.082926 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.095040 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.105428 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.105669 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.105733 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.105804 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.105864 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.107458 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.119077 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.208407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc 
kubenswrapper[4799]: I0127 07:46:22.208589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.208657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.208717 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.208817 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.311162 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.311406 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.311433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.311451 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.311469 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.414382 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.414447 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.414466 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.414492 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.414505 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.420833 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:54:45.056169122 +0000 UTC Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.451165 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.451204 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:22 crc kubenswrapper[4799]: E0127 07:46:22.451378 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:22 crc kubenswrapper[4799]: E0127 07:46:22.451550 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.516954 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.517013 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.517022 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.517037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.517048 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.619435 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.619507 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.619519 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.619537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.619548 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.722473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.722514 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.722522 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.722536 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.722546 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.825265 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.825340 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.825357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.825378 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.825393 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.878679 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/2.log" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.879487 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/1.log" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.882683 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d" exitCode=1 Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.882720 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.882758 4799 scope.go:117] "RemoveContainer" containerID="d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.883473 4799 scope.go:117] "RemoveContainer" containerID="af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d" Jan 27 07:46:22 crc kubenswrapper[4799]: E0127 07:46:22.883641 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.903908 4799 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b
3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3
f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.920246 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 
27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.927774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.927806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.927816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.927838 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.927848 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:22Z","lastTransitionTime":"2026-01-27T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.937285 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.950843 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.963104 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.975084 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:22 crc kubenswrapper[4799]: I0127 07:46:22.988399 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.001014 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:22Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.013490 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.025133 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.032049 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.032099 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.032113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.032132 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.032145 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.037642 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.047770 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc 
kubenswrapper[4799]: I0127 07:46:23.061132 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4
762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.072025 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.082734 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.094913 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.106562 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.115558 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.134338 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.134370 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.134378 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.134393 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.134419 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.237165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.237201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.237211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.237228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.237239 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.341041 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.341105 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.341123 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.341151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.341168 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.421885 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:48:03.491593174 +0000 UTC Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.444414 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.444482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.444500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.444528 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.444549 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.450699 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.450781 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:23 crc kubenswrapper[4799]: E0127 07:46:23.450830 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:23 crc kubenswrapper[4799]: E0127 07:46:23.450953 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.547355 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.547387 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.547395 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.547409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.547420 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.650014 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.650049 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.650059 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.650074 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.650083 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.753029 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.753072 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.753084 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.753099 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.753113 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.855159 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.855191 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.855199 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.855218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.855229 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.887931 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/2.log" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.959855 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.959947 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.959968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.959994 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:23 crc kubenswrapper[4799]: I0127 07:46:23.960014 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:23Z","lastTransitionTime":"2026-01-27T07:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.063351 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.063700 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.063807 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.063890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.063976 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.167387 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.167433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.167443 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.167459 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.167471 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.270775 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.270854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.270874 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.270903 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.271264 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.373984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.374325 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.374407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.374487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.374578 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.422932 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:32:50.942630156 +0000 UTC Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.450645 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:24 crc kubenswrapper[4799]: E0127 07:46:24.450778 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.451129 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:24 crc kubenswrapper[4799]: E0127 07:46:24.451267 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.464074 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.476602 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646
d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.478216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.478275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.478288 4799 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.478335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.478350 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.488003 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 
27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.504770 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.518209 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.533710 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.547525 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.562359 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.573263 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.581409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.581736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.581751 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.581778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.581793 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.584967 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.603782 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.616263 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.637430 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e45105a17008e878638e94685e646f8350db1760c9097e5c962728ed7a07d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"message\\\":\\\"oadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}\\\\nI0127 07:46:07.781685 6246 services_controller.go:360] Finished syncing service packageserver-service on namespace 
openshift-operator-lifecycle-manager for network=default : 913.636µs\\\\nI0127 07:46:07.781688 6246 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f50c60] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0127 07:46:07.781710 6246 services_controller.go:356] Processing sync for service openshift-authentication/oauth-openshift for network=default\\\\nI0127 07:46:07.781717 6246 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI0127 07:46:07.781726 6246 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nF0127 07:46:07.781669 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.651439 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c
0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.665560 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.678729 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.684856 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.684921 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.684939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.684964 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.684979 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.692027 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.711708 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:24Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.788045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc 
kubenswrapper[4799]: I0127 07:46:24.788095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.788105 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.788121 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.788133 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.889968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.890009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.890017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.890031 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.890041 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.992654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.992695 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.992706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.992724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:24 crc kubenswrapper[4799]: I0127 07:46:24.992737 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:24Z","lastTransitionTime":"2026-01-27T07:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.095568 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.095621 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.095633 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.095651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.095665 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.198601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.198685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.198705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.198736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.198755 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.302285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.302363 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.302375 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.302394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.302405 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.405718 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.405764 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.405773 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.405789 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.405798 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.424632 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:26:44.408151134 +0000 UTC Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.451509 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.451509 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:25 crc kubenswrapper[4799]: E0127 07:46:25.451772 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:25 crc kubenswrapper[4799]: E0127 07:46:25.451880 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.509451 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.509525 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.509550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.509583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.509614 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.612716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.612796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.612816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.612850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.612879 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.716460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.716542 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.716560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.716593 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.716618 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.820185 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.820222 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.820231 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.820244 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.820252 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.922630 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.922671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.922681 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.922696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:25 crc kubenswrapper[4799]: I0127 07:46:25.922708 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:25Z","lastTransitionTime":"2026-01-27T07:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.025487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.025543 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.025554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.025578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.025591 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.026240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.026428 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.026501 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:46:42.026481633 +0000 UTC m=+68.337585698 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.127070 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.127123 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.127161 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.127195 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127209 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127292 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:58.127273153 +0000 UTC m=+84.438377218 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127337 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127345 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127364 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127377 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127391 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127431 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:58.127411476 +0000 UTC m=+84.438515541 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127354 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127449 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127450 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:58.127442507 +0000 UTC m=+84.438546822 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.127471 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:46:58.127463178 +0000 UTC m=+84.438567243 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.128819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.128840 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.128849 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.128863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.128878 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.227818 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.228068 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:46:58.22800781 +0000 UTC m=+84.539111935 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.231824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.231862 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.231875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.231892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.231901 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.334403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.334439 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.334449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.334468 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.334479 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.425129 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:20:54.615235192 +0000 UTC Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.437230 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.437287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.437333 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.437350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.437361 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.450955 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.450980 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.451111 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:26 crc kubenswrapper[4799]: E0127 07:46:26.451182 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.539979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.540039 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.540052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.540077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.540090 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.643297 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.643399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.643409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.643426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.643447 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.745689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.745737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.745751 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.745771 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.745784 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.848450 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.848499 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.848512 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.848530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.848543 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.951345 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.951391 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.951401 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.951415 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:26 crc kubenswrapper[4799]: I0127 07:46:26.951425 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:26Z","lastTransitionTime":"2026-01-27T07:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.053601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.053632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.053641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.053653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.053662 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.156083 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.156110 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.156117 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.156131 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.156139 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.258040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.258474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.258483 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.258500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.258509 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.360072 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.360109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.360120 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.360134 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.360145 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.425709 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:50:36.90343554 +0000 UTC Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.450685 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:27 crc kubenswrapper[4799]: E0127 07:46:27.450801 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.451165 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:27 crc kubenswrapper[4799]: E0127 07:46:27.451220 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.462705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.462731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.462740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.462752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.462760 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.565114 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.565152 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.565160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.565174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.565182 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.666945 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.666988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.666998 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.667013 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.667023 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.769600 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.769644 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.769652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.769665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.769674 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.871916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.871979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.871992 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.872012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.872026 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.973932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.973973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.973982 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.973997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:27 crc kubenswrapper[4799]: I0127 07:46:27.974006 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:27Z","lastTransitionTime":"2026-01-27T07:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.076586 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.076628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.076641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.076658 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.076671 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.179149 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.179203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.179216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.179233 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.179246 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.282419 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.282496 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.282518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.282545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.282563 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.385539 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.385594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.385608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.385628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.385641 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.426086 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:55:23.157045694 +0000 UTC Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.450638 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.450821 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:28 crc kubenswrapper[4799]: E0127 07:46:28.450940 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:28 crc kubenswrapper[4799]: E0127 07:46:28.451031 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.487924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.487968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.487979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.487997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.488009 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.591532 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.591582 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.591591 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.591610 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.591623 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.694642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.694728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.694750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.694782 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.694804 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.798489 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.798545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.798578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.798599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.798645 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.901652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.901691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.901703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.901718 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:28 crc kubenswrapper[4799]: I0127 07:46:28.901729 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:28Z","lastTransitionTime":"2026-01-27T07:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.004040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.004070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.004080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.004096 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.004108 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.106847 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.106900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.106914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.106932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.106949 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.210520 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.210609 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.210636 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.210671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.210695 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.313357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.313413 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.313426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.313462 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.313478 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.416367 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.416421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.416432 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.416454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.416468 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.426896 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:47:43.086493032 +0000 UTC Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.450668 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.450742 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:29 crc kubenswrapper[4799]: E0127 07:46:29.450818 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:29 crc kubenswrapper[4799]: E0127 07:46:29.450933 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.519161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.519200 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.519209 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.519223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.519234 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.621151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.621208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.621226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.621250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.621269 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.723893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.723936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.723949 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.723964 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.723977 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.827775 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.827831 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.827846 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.827870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.827882 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.929750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.929800 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.929809 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.929826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:29 crc kubenswrapper[4799]: I0127 07:46:29.929836 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:29Z","lastTransitionTime":"2026-01-27T07:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.033197 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.034065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.034338 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.034394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.034422 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.138434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.138496 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.138506 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.138523 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.138533 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.241960 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.242028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.242048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.242074 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.242095 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.344740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.344789 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.344799 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.344815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.344825 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.428109 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:36:26.801093652 +0000 UTC Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.448070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.448147 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.448164 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.448186 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.448200 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.451364 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.451711 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:30 crc kubenswrapper[4799]: E0127 07:46:30.451760 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:30 crc kubenswrapper[4799]: E0127 07:46:30.452267 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.550479 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.550527 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.550539 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.550559 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.550571 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.654386 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.654430 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.654439 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.654456 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.654467 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.757810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.757878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.757895 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.757921 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.757940 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.862751 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.862820 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.862840 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.862867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.862885 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.965738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.965786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.965795 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.965811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:30 crc kubenswrapper[4799]: I0127 07:46:30.965826 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:30Z","lastTransitionTime":"2026-01-27T07:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.068552 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.068593 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.068601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.068619 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.068628 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.170685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.170726 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.170736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.170753 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.170767 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.273851 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.273914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.273933 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.273959 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.273977 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.378177 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.378281 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.378344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.378385 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.378421 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.429221 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:31:06.554694435 +0000 UTC Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.450498 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.450512 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.450614 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.450686 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.480891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.480923 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.480931 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.480945 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.480999 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.583954 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.583990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.583998 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.584011 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.584020 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.686327 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.686369 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.686388 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.686406 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.686418 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.785714 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.785767 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.785781 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.785804 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.785820 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.797883 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.801265 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.801317 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.801330 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.801350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.801362 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.812239 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.815396 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.815426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.815435 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.815449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.815459 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.832699 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.836732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.836764 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.836773 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.836787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.836797 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.847248 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... status patch payload identical to previous attempt, elided ... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.850329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.850357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.850365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.850377 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.850387 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.863263 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... status patch payload identical to previous attempt, elided ... }\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:31Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:31 crc kubenswrapper[4799]: E0127 07:46:31.863410 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.865165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.865195 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.865207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.865219 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.865227 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.967882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.967950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.967961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.967980 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:31 crc kubenswrapper[4799]: I0127 07:46:31.967993 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:31Z","lastTransitionTime":"2026-01-27T07:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.071095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.071158 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.071167 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.071182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.071196 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.174490 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.174528 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.174537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.174551 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.174560 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.277001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.277081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.277099 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.277130 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.277155 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.380121 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.380175 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.380185 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.380203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.380217 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.430345 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:29:50.255430812 +0000 UTC Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.450918 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.450918 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:32 crc kubenswrapper[4799]: E0127 07:46:32.451168 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:32 crc kubenswrapper[4799]: E0127 07:46:32.451268 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.483452 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.483532 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.483558 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.483587 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.483611 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.587028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.587081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.587094 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.587112 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.587123 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.689695 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.689754 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.689767 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.689790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.689804 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.793452 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.793508 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.793521 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.793542 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.793557 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.897418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.897484 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.897504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.897533 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:32 crc kubenswrapper[4799]: I0127 07:46:32.897555 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:32Z","lastTransitionTime":"2026-01-27T07:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.001971 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.002038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.002059 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.002085 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.002103 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.104965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.105009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.105019 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.105035 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.105046 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.208652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.208698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.208711 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.208729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.208742 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.312914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.312978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.312997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.313023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.313041 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.417148 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.417239 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.417261 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.417290 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.417347 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.431402 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:50:34.224645707 +0000 UTC Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.450931 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:33 crc kubenswrapper[4799]: E0127 07:46:33.451126 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.450950 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:33 crc kubenswrapper[4799]: E0127 07:46:33.451472 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.521064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.521144 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.521162 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.521193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.521215 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.624639 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.624709 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.624735 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.624768 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.624794 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.728009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.728087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.728113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.728145 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.728170 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.789460 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.790254 4799 scope.go:117] "RemoveContainer" containerID="af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d" Jan 27 07:46:33 crc kubenswrapper[4799]: E0127 07:46:33.790467 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.811172 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.831012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.831072 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.831089 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.831118 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.831149 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.832007 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.855196 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.881092 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.903014 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.928198 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a266
8b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.934807 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.934872 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.934892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.934919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.934941 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:33Z","lastTransitionTime":"2026-01-27T07:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.948070 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46e
cea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.970951 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:33 crc kubenswrapper[4799]: I0127 07:46:33.987814 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:33Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.004141 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.019538 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc 
kubenswrapper[4799]: I0127 07:46:34.036168 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.038904 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.038938 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.038952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.038973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.038987 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.056067 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.071176 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.088612 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.105228 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 
2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.143276 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.144244 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.144306 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.144322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.144342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.144380 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.185263 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.248135 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.248236 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.248264 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.248357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.248390 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.352585 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.352632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.352645 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.352663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.352674 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.432380 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:57:00.522434791 +0000 UTC Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.450638 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.450748 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:34 crc kubenswrapper[4799]: E0127 07:46:34.450879 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:34 crc kubenswrapper[4799]: E0127 07:46:34.451617 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.457221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.457251 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.457260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.457273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.457283 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.475140 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.486619 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.497801 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.510023 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.529134 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.542694 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.563038 4799 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.564989 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.565425 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.566488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.566576 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.566612 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.582441 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.599147 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.613338 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.626595 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.651268 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed2
90d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:
45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.666665 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa35
5d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.670442 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.670547 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.670608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.670670 4799 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.670727 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.679141 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.693163 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.703809 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.715000 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.726414 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:34Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:34 crc 
kubenswrapper[4799]: I0127 07:46:34.774102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.774188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.774211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.774246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.774271 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.877815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.877875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.877888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.877908 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.877924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.980787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.980836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.980849 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.980869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:34 crc kubenswrapper[4799]: I0127 07:46:34.980880 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:34Z","lastTransitionTime":"2026-01-27T07:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.084175 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.084221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.084234 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.084251 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.084261 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.186785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.186847 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.186890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.186914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.186931 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.290808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.290854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.290868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.290887 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.290900 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.393606 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.393671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.393690 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.393713 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.393729 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.434084 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:17:13.414551265 +0000 UTC Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.450958 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.450991 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:35 crc kubenswrapper[4799]: E0127 07:46:35.451084 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:35 crc kubenswrapper[4799]: E0127 07:46:35.451203 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.497915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.498006 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.498041 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.498077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.498103 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.602285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.602393 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.602414 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.602444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.602463 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.705747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.705795 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.705807 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.705828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.705841 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.809227 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.809274 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.809291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.809322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.809334 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.912824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.912892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.912931 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.912985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:35 crc kubenswrapper[4799]: I0127 07:46:35.913002 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:35Z","lastTransitionTime":"2026-01-27T07:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.016500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.016579 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.016601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.016631 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.016655 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.128027 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.128090 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.128111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.128152 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.128171 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.231842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.232436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.232689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.232904 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.233170 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.337077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.337129 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.337148 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.337173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.337193 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.434820 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:29:44.613130891 +0000 UTC Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.440778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.440889 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.440914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.440946 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.440972 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.450988 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.451032 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:36 crc kubenswrapper[4799]: E0127 07:46:36.451116 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:36 crc kubenswrapper[4799]: E0127 07:46:36.451334 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.545233 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.545596 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.545689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.545793 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.545873 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.654476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.654824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.654836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.654853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.654865 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.758159 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.758218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.758231 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.758246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.758256 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.861075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.861128 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.861143 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.861167 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.861182 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.964536 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.964580 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.964589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.964607 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:36 crc kubenswrapper[4799]: I0127 07:46:36.964619 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:36Z","lastTransitionTime":"2026-01-27T07:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.067079 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.067115 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.067123 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.067141 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.067150 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.169745 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.169805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.169830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.169855 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.169874 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.272805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.273168 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.273253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.273431 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.273546 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.376335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.376590 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.376653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.376715 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.376785 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.434996 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:13:15.573795583 +0000 UTC Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.450490 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:37 crc kubenswrapper[4799]: E0127 07:46:37.450642 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.450494 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:37 crc kubenswrapper[4799]: E0127 07:46:37.450961 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.478632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.478914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.479050 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.479444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.479543 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.582254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.582283 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.582291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.582321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.582334 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.683808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.683837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.683845 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.683859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.683868 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.786686 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.786728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.786737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.786752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.786763 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.889041 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.889120 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.889143 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.889173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.889193 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.992271 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.992342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.992356 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.992376 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:37 crc kubenswrapper[4799]: I0127 07:46:37.992389 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:37Z","lastTransitionTime":"2026-01-27T07:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.095139 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.095178 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.095186 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.095221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.095230 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.197798 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.197846 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.197859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.197900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.197918 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.300055 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.300099 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.300111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.300128 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.300141 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.402803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.402852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.402863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.402885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.402896 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.436554 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:27:48.652360184 +0000 UTC Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.450866 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.450866 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:38 crc kubenswrapper[4799]: E0127 07:46:38.451012 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:38 crc kubenswrapper[4799]: E0127 07:46:38.451053 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.505412 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.505478 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.505488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.505505 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.505516 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.608065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.608157 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.608168 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.608189 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.608201 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.710852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.710899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.710909 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.710927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.710938 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.813910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.814206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.814357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.814532 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.814665 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.917395 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.917460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.917473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.917492 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:38 crc kubenswrapper[4799]: I0127 07:46:38.917502 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:38Z","lastTransitionTime":"2026-01-27T07:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.020756 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.020791 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.020802 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.020818 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.020829 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.123230 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.123272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.123283 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.123319 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.123332 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.225627 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.225691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.225700 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.225713 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.225722 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.328235 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.328266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.328276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.328293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.328332 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.431107 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.431150 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.431166 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.431186 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.431200 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.437283 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:06:42.68962667 +0000 UTC Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.450671 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:39 crc kubenswrapper[4799]: E0127 07:46:39.450793 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.451167 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:39 crc kubenswrapper[4799]: E0127 07:46:39.451240 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.533662 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.533691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.533699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.533711 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.533722 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.635885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.635925 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.635937 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.635953 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.635962 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.738287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.738361 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.738373 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.738391 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.738405 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.840754 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.840816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.840834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.840852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.840864 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.943632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.943683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.943691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.943713 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:39 crc kubenswrapper[4799]: I0127 07:46:39.943722 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:39Z","lastTransitionTime":"2026-01-27T07:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.045982 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.046027 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.046038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.046053 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.046063 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.148291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.148365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.148375 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.148391 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.148403 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.251076 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.251133 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.251147 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.251170 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.251181 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.353349 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.353394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.353403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.353419 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.353429 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.438051 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:04:08.651796638 +0000 UTC Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.451418 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:40 crc kubenswrapper[4799]: E0127 07:46:40.451551 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.451714 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:40 crc kubenswrapper[4799]: E0127 07:46:40.451872 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.455384 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.455442 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.455461 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.455486 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.455505 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.557312 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.557358 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.557369 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.557386 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.557402 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.660218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.660270 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.660281 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.660329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.660342 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.762900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.762970 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.762988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.763012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.763032 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.865590 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.865640 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.865651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.865671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.865688 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.968528 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.968567 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.968577 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.968589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:40 crc kubenswrapper[4799]: I0127 07:46:40.968600 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:40Z","lastTransitionTime":"2026-01-27T07:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.070608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.070651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.070661 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.070675 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.070684 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.173440 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.173663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.173729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.173827 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.173888 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.276625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.276658 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.276667 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.276682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.276692 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.378586 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.378636 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.378647 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.378668 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.378682 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.439185 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:35:20.0663663 +0000 UTC Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.451035 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.451037 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:41 crc kubenswrapper[4799]: E0127 07:46:41.451240 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:41 crc kubenswrapper[4799]: E0127 07:46:41.451545 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.481686 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.481759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.481778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.481803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.481820 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.584592 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.584668 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.584683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.584703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.584715 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.687780 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.687839 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.687852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.687868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.687881 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.790497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.790541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.790549 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.790564 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.790572 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.892899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.892929 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.892937 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.892952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.892961 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.995197 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.995829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.996088 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.996278 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:41 crc kubenswrapper[4799]: I0127 07:46:41.996425 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:41Z","lastTransitionTime":"2026-01-27T07:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.099167 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.099226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.099240 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.099262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.099277 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.111817 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.111952 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.112015 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:47:14.111999703 +0000 UTC m=+100.423103768 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.135283 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.135374 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.135390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.135412 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.135425 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.147205 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.150250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.150394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.150434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.150461 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.150484 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.162084 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.165572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.165599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.165628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.165644 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.165652 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.177936 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.181634 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.181741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.181842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.181953 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.182057 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.193091 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.196844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.196879 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.196890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.196904 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.196915 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.207720 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:42Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.207858 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.208995 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.209110 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.209198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.209287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.209403 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.311574 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.311647 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.311662 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.311682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.311697 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.413845 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.414149 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.414215 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.414289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.414379 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.440194 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:26:42.954118326 +0000 UTC Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.450578 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.450578 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.450705 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:42 crc kubenswrapper[4799]: E0127 07:46:42.450752 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.518035 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.518081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.518093 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.518116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.518127 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.620464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.620529 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.620541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.620555 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.620565 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.723343 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.723409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.723420 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.723434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.723443 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.825916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.825954 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.825962 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.825979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.825992 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.929180 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.929434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.929449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.929467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:42 crc kubenswrapper[4799]: I0127 07:46:42.929479 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:42Z","lastTransitionTime":"2026-01-27T07:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.032342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.032385 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.032396 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.032412 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.032425 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.135463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.135503 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.135512 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.135530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.135544 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.238643 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.238693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.238704 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.238722 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.239015 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.342020 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.342057 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.342066 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.342079 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.342089 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.441254 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 15:43:54.907370834 +0000 UTC Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.443724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.443745 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.443753 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.443767 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.443775 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.451503 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.451580 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:43 crc kubenswrapper[4799]: E0127 07:46:43.451688 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:43 crc kubenswrapper[4799]: E0127 07:46:43.451796 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.545902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.545941 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.545951 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.545966 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.545975 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.648160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.648213 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.648225 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.648240 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.648252 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.750201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.750283 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.750293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.750330 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.750342 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.852691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.852766 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.852779 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.852796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.852808 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.954318 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.954362 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.954372 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.954391 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.954401 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:43Z","lastTransitionTime":"2026-01-27T07:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.956117 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgr7w_60934e21-bc53-4f80-bb08-bb67af7301cd/kube-multus/0.log" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.956161 4799 generic.go:334] "Generic (PLEG): container finished" podID="60934e21-bc53-4f80-bb08-bb67af7301cd" containerID="10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3" exitCode=1 Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.956185 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgr7w" event={"ID":"60934e21-bc53-4f80-bb08-bb67af7301cd","Type":"ContainerDied","Data":"10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3"} Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.956558 4799 scope.go:117] "RemoveContainer" containerID="10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.973481 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:43Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:43 crc kubenswrapper[4799]: I0127 07:46:43.989774 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:43Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.005968 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.022256 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.036820 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.052318 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a266
8b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.056699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.056731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.056743 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.056757 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.056767 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.064186 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46e
cea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.078929 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.092292 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.104724 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.114661 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc 
kubenswrapper[4799]: I0127 07:46:44.126470 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.137919 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.150956 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.159004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.159029 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.159037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc 
kubenswrapper[4799]: I0127 07:46:44.159052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.159063 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.163693 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.176064 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.198330 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.220905 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.261977 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.262025 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.262038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.262058 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.262071 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.364976 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.365035 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.365052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.365074 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.365089 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.441987 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:02:40.130578513 +0000 UTC Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.451576 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:44 crc kubenswrapper[4799]: E0127 07:46:44.451882 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.452453 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:44 crc kubenswrapper[4799]: E0127 07:46:44.452658 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.468476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.468518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.468531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.468552 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.468566 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.476961 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.496581 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.518423 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.532411 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.549472 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.565862 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc 
kubenswrapper[4799]: I0127 07:46:44.571562 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.571602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.571613 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.571632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.571648 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.579513 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.596634 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.610935 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.623853 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.652446 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.669074 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.674661 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.674726 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.674743 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.674772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.674790 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.694445 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.709443 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.724394 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.742851 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.764636 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.777898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.777957 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.777975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.778001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.778015 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.790779 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.880335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.880835 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.880849 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.880871 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.880884 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.960944 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgr7w_60934e21-bc53-4f80-bb08-bb67af7301cd/kube-multus/0.log" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.960999 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgr7w" event={"ID":"60934e21-bc53-4f80-bb08-bb67af7301cd","Type":"ContainerStarted","Data":"49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.983712 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.983764 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.983785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.983801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.983811 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:44Z","lastTransitionTime":"2026-01-27T07:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.984483 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:44 crc kubenswrapper[4799]: I0127 07:46:44.998376 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:44Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.020003 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.035877 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.049129 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.063728 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.078898 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.086205 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.086243 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.086255 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.086272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.086286 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.095246 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.109842 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T0
7:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.123561 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.139821 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.152796 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.165729 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.178559 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc 
kubenswrapper[4799]: I0127 07:46:45.189129 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.189425 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.189489 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.189592 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.189654 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.192679 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.204977 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.216330 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.228101 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.292626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.292670 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.292682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc 
kubenswrapper[4799]: I0127 07:46:45.292701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.292713 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.395383 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.395421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.395430 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.395444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.395455 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.442254 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:22:41.507169101 +0000 UTC Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.451417 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:45 crc kubenswrapper[4799]: E0127 07:46:45.451588 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.452761 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:45 crc kubenswrapper[4799]: E0127 07:46:45.452930 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.453395 4799 scope.go:117] "RemoveContainer" containerID="af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.499084 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.499323 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.499431 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.499583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.499724 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.603064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.603116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.603129 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.603151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.603165 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.706390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.706434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.706445 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.706460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.706471 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.809256 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.809322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.809334 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.809350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.809361 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.913023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.913066 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.913075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.913095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.913107 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:45Z","lastTransitionTime":"2026-01-27T07:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.966767 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/2.log" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.970171 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.971118 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:46:45 crc kubenswrapper[4799]: I0127 07:46:45.993770 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47
e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:45Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.011991 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.015575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.015621 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.015630 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.015646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.015655 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.030272 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.043154 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.053374 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.064388 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.074340 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.088188 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641
c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.105660 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646
d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.115890 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc 
kubenswrapper[4799]: I0127 07:46:46.117656 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.117681 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.117689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.117702 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.117711 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.128016 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.138245 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.150783 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.160196 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.171910 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.183120 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.193963 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.204075 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5b
b99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.220080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.220116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.220124 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.220140 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.220149 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.322659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.322722 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.322735 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.322755 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.322767 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.429143 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.429212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.429229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.429250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.429262 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.442618 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:48:47.664535154 +0000 UTC Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.450895 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:46 crc kubenswrapper[4799]: E0127 07:46:46.451060 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.450900 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:46 crc kubenswrapper[4799]: E0127 07:46:46.451155 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.532039 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.532083 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.532095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.532113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.532125 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.634044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.634111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.634120 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.634135 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.634144 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.735890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.735936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.735947 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.735961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.735970 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.838202 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.838242 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.838255 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.838271 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.838284 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.940448 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.940541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.940556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.940620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.940635 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:46Z","lastTransitionTime":"2026-01-27T07:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.973738 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/3.log" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.974228 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/2.log" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.975917 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" exitCode=1 Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.975956 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.976008 4799 scope.go:117] "RemoveContainer" containerID="af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.976651 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 07:46:46 crc kubenswrapper[4799]: E0127 07:46:46.976823 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:46:46 crc kubenswrapper[4799]: I0127 07:46:46.990587 4799 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329
d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:46.999934 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.011458 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.022247 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.032077 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.041072 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc 
kubenswrapper[4799]: I0127 07:46:47.042714 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.042772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.042784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.042802 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.042812 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.051691 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.060436 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.068013 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.075794 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.091461 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.100886 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.115219 4799 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af491881cf5ccd87c62d5f0e4422094150660971b34e70427f05d1e33ada954d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:22Z\\\",\\\"message\\\":\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/community-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0127 07:46:22.184812 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:46Z\\\",\\\"message\\\":\\\"r/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210398 6850 services_controller.go:453] Built service openshift-authentication-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210407 6850 services_controller.go:454] Service openshift-authentication-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 
template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:46:46.210413 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:46:46.210408 6850 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath
\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 
crc kubenswrapper[4799]: I0127 07:46:47.124460 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.133392 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.142668 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.145387 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.145421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.145431 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.145445 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.145466 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.167747 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.180201 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c
0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.248331 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.248410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.248423 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.248440 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.248460 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.351466 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.351506 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.351514 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.351531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.351541 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.443457 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:12:35.382971001 +0000 UTC Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.450862 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.450944 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:47 crc kubenswrapper[4799]: E0127 07:46:47.451002 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:47 crc kubenswrapper[4799]: E0127 07:46:47.451155 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.454381 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.454420 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.454433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.454449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.454461 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.557421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.557466 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.557480 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.557502 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.557514 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.660898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.660939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.660951 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.660967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.660976 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.762781 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.762815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.762823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.762837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.762845 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.865422 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.865472 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.865486 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.865504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.865515 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.967978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.968047 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.968060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.968078 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.968090 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:47Z","lastTransitionTime":"2026-01-27T07:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.980551 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/3.log" Jan 27 07:46:47 crc kubenswrapper[4799]: I0127 07:46:47.983497 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 07:46:47 crc kubenswrapper[4799]: E0127 07:46:47.983659 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.003011 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:46Z\\\",\\\"message\\\":\\\"r/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210398 6850 
services_controller.go:453] Built service openshift-authentication-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210407 6850 services_controller.go:454] Service openshift-authentication-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:46:46.210413 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:46:46.210408 6850 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.022454 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.035913 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.049682 4799 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.063043 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.070275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.070340 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.070352 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.070368 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.070377 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.079812 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.095730 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.110688 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.123609 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.136821 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.146552 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.158647 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.171188 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc 
kubenswrapper[4799]: I0127 07:46:48.173593 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.173641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.173658 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.173685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.173703 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.187403 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.198165 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.212375 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.224805 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.235110 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptable
s-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.276394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.276443 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.276456 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.276476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.276490 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.379389 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.379426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.379436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.379450 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.379458 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.444207 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:37:14.479463005 +0000 UTC Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.450668 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.450693 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:48 crc kubenswrapper[4799]: E0127 07:46:48.450831 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:48 crc kubenswrapper[4799]: E0127 07:46:48.450913 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.481621 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.481683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.481699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.481722 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.481737 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.584137 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.584182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.584193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.584208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.584219 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.686984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.687049 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.687082 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.687100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.687112 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.789850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.789900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.789909 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.789925 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.789939 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.892776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.892822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.892831 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.892852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.892862 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.994554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.994591 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.994599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.994611 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:48 crc kubenswrapper[4799]: I0127 07:46:48.994621 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:48Z","lastTransitionTime":"2026-01-27T07:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.097009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.097053 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.097063 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.097078 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.097090 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.200198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.200229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.200243 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.200266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.200277 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.302584 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.302630 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.302639 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.302652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.302661 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.405161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.405219 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.405236 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.405260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.405275 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.444874 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:32:52.162321911 +0000 UTC
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.451249 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.451249 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx"
Jan 27 07:46:49 crc kubenswrapper[4799]: E0127 07:46:49.451404 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 07:46:49 crc kubenswrapper[4799]: E0127 07:46:49.451459 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.507859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.507902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.507912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.507935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.507947 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.610455 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.610518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.610530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.610550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.610568 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.713021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.713057 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.713068 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.713084 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.713095 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.814797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.814882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.814897 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.814934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.814947 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.917415 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.917454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.917463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.917478 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:49 crc kubenswrapper[4799]: I0127 07:46:49.917486 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:49Z","lastTransitionTime":"2026-01-27T07:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.019502 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.019537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.019546 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.019560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.019568 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.121971 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.122008 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.122018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.122034 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.122045 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.224180 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.224223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.224237 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.224250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.224258 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.328054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.328097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.328109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.328126 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.328141 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.430836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.430892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.430904 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.430922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.430940 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.446040 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:12:01.456976406 +0000 UTC
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.451665 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 07:46:50 crc kubenswrapper[4799]: E0127 07:46:50.451779 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.451662 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 07:46:50 crc kubenswrapper[4799]: E0127 07:46:50.452014 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.534165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.534206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.534218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.534236 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.534247 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.636716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.636811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.636830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.636855 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.636875 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.739623 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.739661 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.739671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.739686 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.739695 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.842287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.842355 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.842364 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.842377 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.842386 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.945056 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.945100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.945111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.945131 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:50 crc kubenswrapper[4799]: I0127 07:46:50.945145 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:50Z","lastTransitionTime":"2026-01-27T07:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.047823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.047865 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.047872 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.047916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.047924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.151250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.151291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.151322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.151342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.151354 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.253545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.253625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.253649 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.253678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.253699 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.356617 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.356670 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.356683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.356702 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.356717 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.447111 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:29:29.382598375 +0000 UTC
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.450425 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.450471 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx"
Jan 27 07:46:51 crc kubenswrapper[4799]: E0127 07:46:51.450579 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 07:46:51 crc kubenswrapper[4799]: E0127 07:46:51.450677 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.459288 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.459344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.459353 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.459368 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.459377 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.561875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.561918 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.561927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.562459 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.562483 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.665475 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.665521 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.665531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.665546 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.665556 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.768220 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.768260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.768272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.768289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.768323 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.870135 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.870174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.870182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.870198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.870208 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.972804 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.972868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.972878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.972892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:51 crc kubenswrapper[4799]: I0127 07:46:51.972901 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:51Z","lastTransitionTime":"2026-01-27T07:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.075111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.075158 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.075173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.075194 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.075212 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.177876 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.177914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.177924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.177956 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.177986 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.236641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.236678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.236689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.236705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.236715 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.249671 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.253356 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.253388 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.253397 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.253412 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.253421 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.266062 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.269359 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.269397 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.269409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.269427 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.269439 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.282919 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.286539 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.286561 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.286570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.286583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.286591 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.296978 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.299711 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.299737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.299748 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.299762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.299771 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.310481 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.310638 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.312044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.312065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.312073 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.312090 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.312106 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.415128 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.415165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.415173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.415186 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.415197 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.448075 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:16:48.186809841 +0000 UTC Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.451441 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.451496 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.451561 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:52 crc kubenswrapper[4799]: E0127 07:46:52.451664 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.517647 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.517694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.517707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.517726 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.517741 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.620739 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.620800 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.620816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.620839 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.620859 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.723778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.723824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.723859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.723875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.723885 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.826188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.826231 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.826239 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.826252 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.826260 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.928218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.928276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.928293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.928366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:52 crc kubenswrapper[4799]: I0127 07:46:52.928385 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:52Z","lastTransitionTime":"2026-01-27T07:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.030763 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.030811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.030823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.030840 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.030851 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.133557 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.133608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.133626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.133651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.133668 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.235926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.235967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.235978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.235995 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.236006 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.338247 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.338282 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.338292 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.338322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.338332 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.440737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.440813 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.440825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.440842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.440853 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.448143 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:55:22.614769085 +0000 UTC Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.450743 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.450812 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:53 crc kubenswrapper[4799]: E0127 07:46:53.450899 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:53 crc kubenswrapper[4799]: E0127 07:46:53.450972 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.543816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.543848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.543856 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.543872 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.543882 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.646233 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.646268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.646279 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.646315 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.646326 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.748627 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.748666 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.748678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.748693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.748706 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.851239 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.851295 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.851344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.851365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.851379 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.953873 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.953999 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.954013 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.954033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:53 crc kubenswrapper[4799]: I0127 07:46:53.954045 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:53Z","lastTransitionTime":"2026-01-27T07:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.057101 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.057144 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.057157 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.057172 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.057183 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.159983 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.160037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.160060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.160082 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.160100 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.262286 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.262333 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.262342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.262354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.262364 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.365900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.366171 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.366265 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.366400 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.366492 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.448907 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:33:09.282578865 +0000 UTC Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.451800 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:54 crc kubenswrapper[4799]: E0127 07:46:54.452130 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.451850 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:54 crc kubenswrapper[4799]: E0127 07:46:54.452638 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.462576 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.468543 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.468572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.468581 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.468596 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.468606 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.474670 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.484551 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc 
kubenswrapper[4799]: I0127 07:46:54.499472 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4
762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.510106 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.520487 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.536796 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.547696 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.558565 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.572819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.572853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.572864 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.572879 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.572888 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.573445 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.605047 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.629405 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.649970 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:46Z\\\",\\\"message\\\":\\\"r/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210398 6850 services_controller.go:453] Built service openshift-authentication-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210407 6850 services_controller.go:454] Service openshift-authentication-operator/metrics for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:46:46.210413 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:46:46.210408 6850 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.664811 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c
0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.674821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.674849 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.674857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.674869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.674878 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.675760 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0
d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.686480 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.700919 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.713079 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.777327 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.777667 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.777757 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.777854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.777962 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.879587 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.879636 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.879665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.879687 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.879699 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.982512 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.982583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.982608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.982638 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:54 crc kubenswrapper[4799]: I0127 07:46:54.982660 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:54Z","lastTransitionTime":"2026-01-27T07:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.084714 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.084991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.085093 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.085194 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.085270 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.188073 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.188396 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.188553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.188811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.188922 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.290852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.290890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.290898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.290914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.290924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.393784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.393825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.393836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.393851 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.393861 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.449197 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:53:58.580990585 +0000 UTC Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.450439 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.450508 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:55 crc kubenswrapper[4799]: E0127 07:46:55.450625 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:55 crc kubenswrapper[4799]: E0127 07:46:55.450780 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.496178 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.496244 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.496268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.496334 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.496361 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.598824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.599096 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.599172 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.599254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.599360 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.702432 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.702473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.702483 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.702500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.702511 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.804803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.804841 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.804851 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.804866 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.804877 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.907813 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.908131 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.908449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.908682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:55 crc kubenswrapper[4799]: I0127 07:46:55.908858 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:55Z","lastTransitionTime":"2026-01-27T07:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.011292 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.011374 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.011387 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.011403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.011413 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.115060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.115101 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.115109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.115125 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.115133 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.218023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.218066 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.218077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.218094 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.218107 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.320290 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.321343 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.321501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.321665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.321800 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.424212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.424255 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.424266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.424282 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.424295 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.449903 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:21:39.136502264 +0000 UTC Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.451391 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:56 crc kubenswrapper[4799]: E0127 07:46:56.451487 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.451393 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:56 crc kubenswrapper[4799]: E0127 07:46:56.451592 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.526956 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.526992 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.527007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.527029 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.527044 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.629397 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.629441 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.629451 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.629466 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.629478 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.732115 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.732155 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.732164 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.732183 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.732195 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.834429 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.834479 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.834495 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.834519 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.834535 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.936708 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.937036 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.937053 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.937067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:56 crc kubenswrapper[4799]: I0127 07:46:56.937075 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:56Z","lastTransitionTime":"2026-01-27T07:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.039280 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.039335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.039346 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.039360 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.039370 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.141703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.141749 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.141765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.141787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.141804 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.244228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.244280 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.244293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.244352 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.244370 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.346720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.346787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.346804 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.346828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.346845 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.449338 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.449372 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.449383 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.449399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.449410 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.450109 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:02:15.990349205 +0000 UTC Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.450733 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:57 crc kubenswrapper[4799]: E0127 07:46:57.450824 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.450985 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:57 crc kubenswrapper[4799]: E0127 07:46:57.451056 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.552063 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.552113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.552123 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.552572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.552593 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.655061 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.655106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.655115 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.655130 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.655142 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.756943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.757229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.757297 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.757420 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.757518 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.859201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.859573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.859687 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.859777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.859868 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.962178 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.962208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.962218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.962232 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:57 crc kubenswrapper[4799]: I0127 07:46:57.962242 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:57Z","lastTransitionTime":"2026-01-27T07:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.066309 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.066357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.066369 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.066386 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.066404 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.169405 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.169438 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.169447 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.169462 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.169472 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.175347 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.175394 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.175422 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.175439 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175522 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175552 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175529 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175589 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175586 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175624 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175637 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175603 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175578 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:48:02.175562844 +0000 UTC m=+148.486666909 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175732 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 07:48:02.175713678 +0000 UTC m=+148.486817743 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175746 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 07:48:02.175739429 +0000 UTC m=+148.486843494 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.175756 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 07:48:02.175750789 +0000 UTC m=+148.486854854 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.272654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.273211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.273221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.273239 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.273249 4799 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.276089 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.276194 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:48:02.276174936 +0000 UTC m=+148.587279001 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.375801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.375884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.375896 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.375912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.375921 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.451143 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:19:46.463131002 +0000 UTC Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.451241 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.451333 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.451360 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:46:58 crc kubenswrapper[4799]: E0127 07:46:58.451597 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.462732 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.478719 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.478766 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.478778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.478792 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.478801 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.581482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.581521 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.581528 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.581541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.581549 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.686256 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.686360 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.686379 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.686416 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.686437 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.789779 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.789815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.789824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.789838 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.789848 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.892435 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.892479 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.892487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.892501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.892510 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.995152 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.995192 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.995202 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.995217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:58 crc kubenswrapper[4799]: I0127 07:46:58.995227 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:58Z","lastTransitionTime":"2026-01-27T07:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.097415 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.097463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.097474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.097490 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.097501 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.199740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.199780 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.199792 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.199810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.199823 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.302853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.302911 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.302927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.302951 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.302968 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.405657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.405699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.405709 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.405726 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.405737 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.450761 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.450896 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:46:59 crc kubenswrapper[4799]: E0127 07:46:59.451230 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.451345 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:26:11.11638992 +0000 UTC Jan 27 07:46:59 crc kubenswrapper[4799]: E0127 07:46:59.451550 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.508621 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.508678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.508699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.508726 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.508747 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.611408 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.611457 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.611469 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.611487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.611499 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.714207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.714268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.714285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.714342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.714361 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.816481 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.816559 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.816576 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.816599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.816616 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.919877 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.919922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.919932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.919947 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:46:59 crc kubenswrapper[4799]: I0127 07:46:59.919956 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:46:59Z","lastTransitionTime":"2026-01-27T07:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.022822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.022863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.022871 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.022884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.022894 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.124972 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.125038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.125056 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.125079 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.125097 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.228150 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.228211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.228223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.228242 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.228255 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.330622 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.330665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.330677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.330694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.330727 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.432769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.432825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.432840 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.432860 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.432907 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.450557 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:00 crc kubenswrapper[4799]: E0127 07:47:00.450671 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.450554 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:00 crc kubenswrapper[4799]: E0127 07:47:00.450974 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.451406 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:31:14.521657993 +0000 UTC Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.452041 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 07:47:00 crc kubenswrapper[4799]: E0127 07:47:00.452257 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.535396 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.535463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.535482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.535506 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.535527 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.638038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.638078 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.638086 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.638100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.638110 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.740691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.740749 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.740766 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.740786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.740804 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.842511 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.842556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.842598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.842620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.842634 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.946347 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.946398 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.946409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.946427 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:00 crc kubenswrapper[4799]: I0127 07:47:00.946439 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:00Z","lastTransitionTime":"2026-01-27T07:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.049010 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.049064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.049075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.049092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.049102 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.151650 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.151720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.151743 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.151773 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.151795 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.255149 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.255253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.255267 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.255285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.255331 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.358148 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.358223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.358246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.358270 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.358288 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.451104 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:01 crc kubenswrapper[4799]: E0127 07:47:01.451257 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.451469 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:57:53.385967287 +0000 UTC Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.451567 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:01 crc kubenswrapper[4799]: E0127 07:47:01.452039 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.460821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.460859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.460869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.460884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.460894 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.563877 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.564011 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.564048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.564077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.564100 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.666518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.666561 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.666570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.666586 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.666595 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.768844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.768888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.768899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.768916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.768927 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.871502 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.871560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.871594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.871612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.871624 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.974344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.974401 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.974410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.974426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:01 crc kubenswrapper[4799]: I0127 07:47:01.974437 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:01Z","lastTransitionTime":"2026-01-27T07:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.077635 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.078367 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.078385 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.078423 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.078437 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.180397 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.180446 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.180456 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.180474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.180488 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.282803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.282866 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.282881 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.282902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.282918 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.385064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.385116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.385129 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.385146 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.385157 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.450929 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.451006 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.451056 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.451248 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.451896 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:39:26.111159966 +0000 UTC Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.488263 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.488332 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.488344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.488362 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.488373 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.577075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.577122 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.577137 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.577152 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.577163 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.606053 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.609748 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.609785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.609801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.609823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.609840 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.622200 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.626440 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.626474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.626487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.626504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.626515 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.641583 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.645555 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.645589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.645607 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.645624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.645634 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.658397 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.662000 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.662035 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.662047 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.662065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.662076 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.673853 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:02 crc kubenswrapper[4799]: E0127 07:47:02.674015 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.675686 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.675723 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.675732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.675746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.675758 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.778012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.778074 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.778082 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.778097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.778108 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.881732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.881779 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.881789 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.881805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.881815 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.984703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.984761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.984777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.984801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:02 crc kubenswrapper[4799]: I0127 07:47:02.984819 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:02Z","lastTransitionTime":"2026-01-27T07:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.087768 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.087818 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.087829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.087849 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.087861 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.190778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.190837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.190862 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.190891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.190911 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.293411 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.293474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.293493 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.293516 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.293532 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.396385 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.396467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.396491 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.396523 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.396544 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.452187 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.452188 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:32:59.345772913 +0000 UTC Jan 27 07:47:03 crc kubenswrapper[4799]: E0127 07:47:03.452455 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.452198 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:03 crc kubenswrapper[4799]: E0127 07:47:03.453674 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.498987 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.499032 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.499042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.499056 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.499065 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.602070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.602148 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.602166 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.602195 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.602214 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.706182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.706234 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.706249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.706270 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.706286 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.809221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.809275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.809296 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.809352 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.809371 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.911092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.911137 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.911151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.911168 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:03 crc kubenswrapper[4799]: I0127 07:47:03.911180 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:03Z","lastTransitionTime":"2026-01-27T07:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.014486 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.014567 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.014585 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.014613 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.014637 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.117407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.117442 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.117455 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.117471 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.117484 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.219795 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.219834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.219842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.219857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.219867 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.323572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.323624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.323635 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.323653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.323664 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.425909 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.425950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.425962 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.425979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.425992 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.450845 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:04 crc kubenswrapper[4799]: E0127 07:47:04.451154 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.451420 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:04 crc kubenswrapper[4799]: E0127 07:47:04.451523 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.452701 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:44:37.916417528 +0000 UTC Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.466013 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.478985 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.490364 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.503085 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332
cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.521742 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.527866 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.527900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.527910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.527924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.527933 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.534158 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.556358 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:46Z\\\",\\\"message\\\":\\\"r/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210398 6850 services_controller.go:453] Built service openshift-authentication-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210407 6850 services_controller.go:454] Service openshift-authentication-operator/metrics for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:46:46.210413 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:46:46.210408 6850 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.568369 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.581544 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.593603 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.605785 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.619507 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641
c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.628736 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9391125-5662-45bd-872a-60c0f7c8a218\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be502e8f64e3e936b549c9bf744c711d41ed82bea32d16c1a605e494d30e273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f597e8be8ce5125c1221f89be67eb02793d0673551d973741d8d1e1a470fbb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f597e8be8ce5125c1221f89be67eb02793d0673551d973741d8d1e1a470fbb14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.630550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.630579 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.630586 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.630599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.630608 4799 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.640573 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.651196 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.663884 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.673374 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.684772 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.695775 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:04Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:04 crc 
kubenswrapper[4799]: I0127 07:47:04.732785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.732845 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.732869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.732899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.732923 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.835328 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.835379 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.835392 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.835407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.835419 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.937468 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.937511 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.937519 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.937535 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:04 crc kubenswrapper[4799]: I0127 07:47:04.937546 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:04Z","lastTransitionTime":"2026-01-27T07:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.038859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.038898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.038906 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.038918 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.038927 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.141702 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.141738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.141747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.141760 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.141768 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.244193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.244316 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.244350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.244367 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.244378 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.346234 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.346270 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.346280 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.346318 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.346329 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.448452 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.448553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.448565 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.448583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.448595 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.450781 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:05 crc kubenswrapper[4799]: E0127 07:47:05.450891 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.450781 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:05 crc kubenswrapper[4799]: E0127 07:47:05.451035 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.452844 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:06:46.30320701 +0000 UTC Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.551209 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.551260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.551269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.551284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.551293 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.653942 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.653983 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.653994 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.654008 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.654017 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.755652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.755683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.755691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.755704 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.755712 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.857546 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.857575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.857583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.857597 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.857606 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.960025 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.960069 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.960080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.960096 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:05 crc kubenswrapper[4799]: I0127 07:47:05.960109 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:05Z","lastTransitionTime":"2026-01-27T07:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.062997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.063040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.063050 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.063065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.063075 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.165360 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.165397 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.165409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.165425 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.165435 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.267326 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.267408 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.267421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.267476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.267490 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.370147 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.370185 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.370196 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.370212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.370223 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.451093 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.451185 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 07:47:06 crc kubenswrapper[4799]: E0127 07:47:06.451224 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 07:47:06 crc kubenswrapper[4799]: E0127 07:47:06.451349 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.453147 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:39:08.521178552 +0000 UTC
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.472760 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.472811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.472823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.472845 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.472857 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.575417 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.575456 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.575473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.575492 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.575503 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.677127 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.677158 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.677167 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.677183 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.677192 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.779688 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.779750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.779764 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.779782 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.779794 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.884828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.884875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.884892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.884907 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.884917 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.986838 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.986877 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.986886 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.986900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:06 crc kubenswrapper[4799]: I0127 07:47:06.986909 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:06Z","lastTransitionTime":"2026-01-27T07:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.088854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.088893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.088901 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.088913 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.088924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.192129 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.192445 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.192468 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.192498 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.192520 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.295181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.295217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.295225 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.295241 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.295251 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.398054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.398101 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.398110 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.398127 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.398136 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.450967 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx"
Jan 27 07:47:07 crc kubenswrapper[4799]: E0127 07:47:07.451120 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.450985 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 07:47:07 crc kubenswrapper[4799]: E0127 07:47:07.451354 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.453979 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:46:31.294250945 +0000 UTC
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.500966 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.501019 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.501032 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.501050 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.501066 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.604155 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.604202 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.604215 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.604232 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.604245 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.707432 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.707475 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.707484 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.707499 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.707517 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.810367 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.810473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.810541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.810581 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.810778 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.914097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.914138 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.914148 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.914161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:07 crc kubenswrapper[4799]: I0127 07:47:07.914170 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:07Z","lastTransitionTime":"2026-01-27T07:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.016729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.016781 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.016792 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.016809 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.016818 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.119653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.119701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.119713 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.119731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.119745 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.222188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.222240 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.222254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.222279 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.222292 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.324370 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.324457 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.324474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.324493 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.324503 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.426569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.426607 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.426616 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.426629 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.426638 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.451290 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.451444 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 07:47:08 crc kubenswrapper[4799]: E0127 07:47:08.451594 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 07:47:08 crc kubenswrapper[4799]: E0127 07:47:08.451711 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.454192 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:01:32.327401419 +0000 UTC
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.528703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.528738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.528749 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.528765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.528777 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.631328 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.631371 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.631380 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.631395 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.631406 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.733944 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.733985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.733993 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.734007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.734017 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.836589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.836626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.836635 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.836698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.836709 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.939218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.939279 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.939289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.939329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:08 crc kubenswrapper[4799]: I0127 07:47:08.939341 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:08Z","lastTransitionTime":"2026-01-27T07:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.042501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.042553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.042562 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.042577 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.042588 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.145017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.145077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.145095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.145119 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.145137 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.247698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.247731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.247739 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.247752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.247761 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.349774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.349806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.349814 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.349828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.349838 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.451057 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.451069 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:09 crc kubenswrapper[4799]: E0127 07:47:09.451292 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:09 crc kubenswrapper[4799]: E0127 07:47:09.451399 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.453356 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.453422 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.453436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.453464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.453822 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.454731 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:04:47.363735912 +0000 UTC Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.556964 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.557011 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.557021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.557037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.557049 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.659611 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.659790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.659843 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.659871 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.659889 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.762294 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.762617 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.762626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.762642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.762654 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.865668 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.865734 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.865753 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.865777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.865796 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.969597 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.969648 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.969660 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.969679 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:09 crc kubenswrapper[4799]: I0127 07:47:09.969689 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:09Z","lastTransitionTime":"2026-01-27T07:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.071644 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.071697 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.071714 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.071731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.071744 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.174207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.174254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.174323 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.174341 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.174352 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.277563 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.277600 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.277612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.277626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.277638 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.379434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.379478 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.379486 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.379504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.379513 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.450464 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.450599 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:10 crc kubenswrapper[4799]: E0127 07:47:10.450892 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:10 crc kubenswrapper[4799]: E0127 07:47:10.451272 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.455032 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:22:22.41073962 +0000 UTC Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.481891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.481933 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.481942 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.481961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.481970 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.585065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.585108 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.585119 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.585134 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.585145 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.687206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.687770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.687848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.687927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.687998 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.790653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.790699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.790709 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.790724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.790733 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.893381 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.893451 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.893474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.893504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.893527 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.996142 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.996785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.996878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.996949 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:10 crc kubenswrapper[4799]: I0127 07:47:10.997021 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:10Z","lastTransitionTime":"2026-01-27T07:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.099769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.099825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.099842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.099866 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.099884 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.202366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.202474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.202493 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.202560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.202579 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.305712 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.305765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.305787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.305816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.305840 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.408842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.408891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.408906 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.408922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.408933 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.450698 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.450719 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:11 crc kubenswrapper[4799]: E0127 07:47:11.450871 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:11 crc kubenswrapper[4799]: E0127 07:47:11.451005 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.455784 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:06:08.17685555 +0000 UTC Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.510888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.510943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.510955 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.510973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.510985 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.613537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.613577 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.613588 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.613604 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.613619 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.716453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.716516 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.716532 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.716557 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.716576 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.819482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.819548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.819560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.819582 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.819596 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.922424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.922505 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.922530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.922559 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:11 crc kubenswrapper[4799]: I0127 07:47:11.922581 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:11Z","lastTransitionTime":"2026-01-27T07:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.025693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.025755 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.025772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.025796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.025819 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.128600 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.128663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.128680 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.128707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.128725 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.232816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.232914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.232939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.233023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.233051 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.337426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.337516 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.337541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.337569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.337587 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.440822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.440880 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.440896 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.440928 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.440943 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.451344 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:12 crc kubenswrapper[4799]: E0127 07:47:12.451513 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.451593 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:12 crc kubenswrapper[4799]: E0127 07:47:12.452177 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.452710 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 07:47:12 crc kubenswrapper[4799]: E0127 07:47:12.452962 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.455872 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:07:22.362667745 +0000 UTC Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.544107 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.544163 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.544176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.544195 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.544208 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.647338 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.647410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.647433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.647464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.647486 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.750064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.750114 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.750126 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.750145 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.750156 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.852377 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.852436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.852454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.852473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.852485 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.955564 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.955638 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.955657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.955684 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:12 crc kubenswrapper[4799]: I0127 07:47:12.955704 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:12Z","lastTransitionTime":"2026-01-27T07:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.041985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.042037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.042049 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.042066 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.042078 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.072440 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.076607 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.076696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.076719 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.076747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.076766 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.098875 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.104217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.104262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.104272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.104287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.104296 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.121850 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.125170 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.125203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.125211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.125226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.125235 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.144374 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.148689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.148751 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.148770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.148823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.148841 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.168234 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.168455 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.170247 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.170312 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.170327 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.170352 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.170368 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.272834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.272867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.272877 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.272890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.272899 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.375997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.376038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.376049 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.376067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.376080 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.450630 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.450687 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.450801 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:13 crc kubenswrapper[4799]: E0127 07:47:13.450894 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.456911 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:23:25.485126619 +0000 UTC Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.478853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.478891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.478901 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.478915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.478924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.582200 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.582250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.582263 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.582280 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.582322 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.684626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.684665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.684683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.684700 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.684711 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.791094 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.791633 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.791660 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.791694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.791719 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.893425 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.893465 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.893474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.893488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.893497 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.996045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.996116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.996135 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.996157 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:13 crc kubenswrapper[4799]: I0127 07:47:13.996175 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:13Z","lastTransitionTime":"2026-01-27T07:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.098481 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.098531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.098546 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.098565 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.098578 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.117330 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:14 crc kubenswrapper[4799]: E0127 07:47:14.117666 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:47:14 crc kubenswrapper[4799]: E0127 07:47:14.117727 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs podName:0af5040b-0391-423c-b87d-90df4965f58f nodeName:}" failed. No retries permitted until 2026-01-27 07:48:18.11771114 +0000 UTC m=+164.428815215 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs") pod "network-metrics-daemon-qq7cx" (UID: "0af5040b-0391-423c-b87d-90df4965f58f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.200768 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.200808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.200822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.200840 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.200850 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.303344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.303399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.303416 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.303437 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.303453 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.406464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.406511 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.406525 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.406543 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.406558 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.450940 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.451036 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:14 crc kubenswrapper[4799]: E0127 07:47:14.451147 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:14 crc kubenswrapper[4799]: E0127 07:47:14.451259 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.457344 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:11:13.055435215 +0000 UTC Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.475157 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2498c7744ebb5b57c00ed494e0d4c8b2f0bc4b33231965be2f48adfdab2ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.493991 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e071f3f1374432f8fd33124ce5d7fcc219d02e7e37828b5e6d17a6301fa8469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.510113 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gc4vh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cf6cd90-b4bf-4e62-b758-d31590e43866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a6864db56e386b1388f61181edd87e2d8480d5b4c79f00ed79cc7469915e495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpgp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gc4vh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.511699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.511759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.511777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.511799 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.511817 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.528523 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"058f98c8-1b84-48d3-8167-ad1a5584351c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9dbac0520a7a38a41182fdb3679c5bb99828d0f8399b09fe98f968b79b80107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrt7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqpcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.566369 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e89fb82b-b7c8-437b-b916-34a5f7e30de0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4f1c311ecb4536ee27582b09be10d725312348aa0474ab86ffe45c8defb03a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8364b09a28e8cb86808b658b288cd43f54cdfb8c29caadf367b3f36ffdd507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b00cf19e2fb323ec026efe341ea8666a90c98eeec0b9035519b50e7df31672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80bc78d8bcb5d9f994d5271d99e3f4c9447cf556444d194f935b9d7c023fc94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ec1099532bfa956bca3732eb19adf86ef4376a66a095d5d0663a1a70450512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2dd5f75143dee5be4ed323abf4e3f8a402016508f8db22a923741747433325f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99505bdf6d7ec235ee78b20f2aea12fb83cde23ff06c67e4918c923d8724f4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e3e1706d936b91bbe70bf4577515ad91a2508a38be85768b40d781289e1fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.586660 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bcc2e13aa205b3b1b7547ba3333833a4902d76154b99796226fb331bca68d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c566a73a47ca07ad80c6dd758bb65d3c6958137d4f01aa58f576db399dc7697f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.614191 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"836be94a-c1de-4b1c-b98a-7af78a2a4607\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:46Z\\\",\\\"message\\\":\\\"r/metrics per-node LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210398 6850 services_controller.go:453] Built service openshift-authentication-operator/metrics template LB for network=default: []services.LB{}\\\\nI0127 07:46:46.210407 6850 services_controller.go:454] Service openshift-authentication-operator/metrics for 
network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0127 07:46:46.210413 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:46:46Z is after 2025-08-24T17:21:41Z]\\\\nI0127 07:46:46.210408 6850 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hggcd_openshift-ovn-kubernetes(836be94a-c1de-4b1c-b98a-7af78a2a4607)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35e47497388ba56628
faa5aa461d9824937354361c121321e1aba4f7a4437d95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnc94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hggcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.616825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.616899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.616911 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.616990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.617005 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.627852 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e782c50a-95ff-4537-9c07-5e070f7c71e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aabf86c18dbd268fa96cfced40f184022f4fbad7a734ce70074d9936f89f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://619bb098f35ccb2d3cb60941f1d90190ccda32228bfa8404b12dfe65638115fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://5ad3a6d629990be5436c97dac417216c9657fb9d06f61c42c586d86b90383aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.641379 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.658914 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.672877 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tgr7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60934e21-bc53-4f80-bb08-bb67af7301cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T07:46:43Z\\\",\\\"message\\\":\\\"2026-01-27T07:45:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe\\\\n2026-01-27T07:45:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30d81e92-d000-4f28-836e-fac87078fffe to /host/opt/cni/bin/\\\\n2026-01-27T07:45:58Z [verbose] multus-daemon started\\\\n2026-01-27T07:45:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T07:46:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tgr7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.694401 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e09525d-7c34-4bc8-883e-f6dafcd0b4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://641
c90515346e2f25100ba3de46340b71ddbb8b721d57ed152ecf3803ff07900\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://980911e5662023a35ec2b1eafd2be938232ff1c2e03bd4ed290d64b11230130f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bd98bd6b289cc9ecaf51df6e9d5c653e6ca9e5efc679bceb3d4856bf8ee8d06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2bc7d06d6aecc0329386464d0819e8af825e22d5b341a14764b48e251095a25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8016c0ccf52ef7b15b9c8a196e24cd7e8b9067448cbdddfdf1ec10ac4f27575b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5bbfa11257416030a301b88f10200c9350b13e26345bf46cf052052ba2e3870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67073700dac039b8e7bf4198f2fbb5c7bbf67c2fb6d3ab3a38ae243a2fb82fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnc6t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8fm6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.705159 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9391125-5662-45bd-872a-60c0f7c8a218\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be502e8f64e3e936b549c9bf744c711d41ed82bea32d16c1a605e494d30e273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f597e8be8ce5125c1221f89be67eb02793d0673551d973741d8d1e1a470fbb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f597e8be8ce5125c1221f89be67eb02793d0673551d973741d8d1e1a470fbb14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.719829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.719864 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.719876 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.719893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.719904 4799 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.724375 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"847339c5-936a-45d5-b326-b9aa8d8d5d97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 07:45:48.008375 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 07:45:48.079039 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3814140980/tls.crt::/tmp/serving-cert-3814140980/tls.key\\\\\\\"\\\\nI0127 07:45:54.307386 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 07:45:54.312140 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 07:45:54.312180 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 07:45:54.312219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 07:45:54.312230 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 07:45:54.325981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 07:45:54.326027 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326037 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 07:45:54.326046 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 07:45:54.326053 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 07:45:54.326062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 07:45:54.326067 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 07:45:54.326547 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 07:45:54.331706 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.737794 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0048c044-fea4-4d5a-8fa0-4d5c00dd8814\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af711974c3b7e4794dc83d00cb95ff6ab7d8df85618cbf2dabe33992368b332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19a4d36b1be5b740f1c9a0aab6e46ecea9e21c53d8468379e9bf55bd8cf0721b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4da3ddd96887cadba76905a55343d81456ea50b7dba277376562c73562d948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://693cb97f211c4f3066ae58fb26b188bf2c6e343441cdaf9680a6cdf31be10e0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T07:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T07:45:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.750948 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.766141 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w5s6n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb74b5d4-3624-4b27-9621-2d38cc2c6f3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://190c5284e837261459499c7e7ab4be4fc950250943b47f2d8a1a3f6343c57d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmlwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:45:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w5s6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.779994 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7997b05f-6093-45cc-aa37-f988051c7f32\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30ba992e2bfa7a985a725ee707991b95bf535cdc46bd800e5ca71fde162563cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56068aa0157d1e112901534ebf61c7bb646d76fc4bfa77f6f68fc63b4b44cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T07:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvtsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzbd\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.793426 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af5040b-0391-423c-b87d-90df4965f58f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T07:46:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8b4hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T07:46:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qq7cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:14 crc 
kubenswrapper[4799]: I0127 07:47:14.822659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.822729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.822750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.822784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.822806 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.925012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.925071 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.925081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.925100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:14 crc kubenswrapper[4799]: I0127 07:47:14.925113 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:14Z","lastTransitionTime":"2026-01-27T07:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.028460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.028554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.028584 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.028620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.028645 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.131671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.131716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.131727 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.131744 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.131755 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.234878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.234945 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.234963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.234995 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.235013 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.337979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.338046 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.338063 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.338087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.338103 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.441696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.441786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.441796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.441817 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.441832 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.451144 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.451165 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:15 crc kubenswrapper[4799]: E0127 07:47:15.451281 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:15 crc kubenswrapper[4799]: E0127 07:47:15.451389 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.458220 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:53:32.125444241 +0000 UTC Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.544694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.544745 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.544754 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.544787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.544797 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.647908 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.648001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.648032 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.648061 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.648082 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.750910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.750964 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.750974 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.750993 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.751006 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.853840 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.853880 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.853888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.853902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.853912 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.956548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.956594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.956602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.956617 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:15 crc kubenswrapper[4799]: I0127 07:47:15.956629 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:15Z","lastTransitionTime":"2026-01-27T07:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.059145 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.059206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.059228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.059249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.059262 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.162576 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.162654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.162674 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.162699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.162718 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.266793 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.266873 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.266885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.266900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.266914 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.369161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.369209 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.369220 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.369237 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.369249 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.451336 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.451359 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:16 crc kubenswrapper[4799]: E0127 07:47:16.451485 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:16 crc kubenswrapper[4799]: E0127 07:47:16.451563 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.458660 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:24:17.232918202 +0000 UTC Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.471762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.471790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.471894 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.471910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.471922 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.574830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.574872 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.574882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.574898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.574909 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.678742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.678788 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.678798 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.678816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.678832 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.783137 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.783229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.783250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.783284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.783346 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.886382 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.886439 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.886452 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.886475 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.886488 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.989822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.989885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.989915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.989936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:16 crc kubenswrapper[4799]: I0127 07:47:16.989949 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:16Z","lastTransitionTime":"2026-01-27T07:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.092843 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.092917 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.092934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.092963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.092980 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.195293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.195351 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.195377 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.195394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.195407 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.298949 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.299031 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.299048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.299078 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.299100 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.401743 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.401811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.401829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.401854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.401876 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.450719 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.450794 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:17 crc kubenswrapper[4799]: E0127 07:47:17.450916 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:17 crc kubenswrapper[4799]: E0127 07:47:17.451064 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.459015 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:09:42.097351749 +0000 UTC Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.506027 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.506139 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.506174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.506218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.506260 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.610226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.610359 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.610383 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.610411 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.610430 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.714738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.714832 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.714852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.714885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.714905 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.817450 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.817516 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.817538 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.817569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.817592 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.921404 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.921454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.921465 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.921483 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:17 crc kubenswrapper[4799]: I0127 07:47:17.921499 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:17Z","lastTransitionTime":"2026-01-27T07:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.026043 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.026104 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.026116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.026136 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.026151 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.128797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.128860 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.128881 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.128905 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.128924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.231571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.231637 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.231656 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.231685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.231703 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.335249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.335288 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.335339 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.335371 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.335390 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.439204 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.439264 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.439287 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.439364 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.439383 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.451477 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.451477 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:18 crc kubenswrapper[4799]: E0127 07:47:18.451779 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:18 crc kubenswrapper[4799]: E0127 07:47:18.451823 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.459204 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:38:05.129244641 +0000 UTC Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.542967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.543048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.543071 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.543102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.543127 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.646914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.649416 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.649431 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.649453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.649466 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.752793 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.752871 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.752888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.752920 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.752939 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.855589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.855640 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.855655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.855677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.855694 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.958336 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.958404 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.958426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.958453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:18 crc kubenswrapper[4799]: I0127 07:47:18.958477 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:18Z","lastTransitionTime":"2026-01-27T07:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.062160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.062208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.062218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.062237 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.062250 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.165942 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.166008 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.166031 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.166058 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.166106 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.269070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.269120 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.269133 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.269153 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.269166 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.372524 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.372583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.372596 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.372616 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.372632 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.450661 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.450802 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:19 crc kubenswrapper[4799]: E0127 07:47:19.450865 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:19 crc kubenswrapper[4799]: E0127 07:47:19.451291 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.459604 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:44:22.81801029 +0000 UTC Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.475297 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.475400 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.475418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.475523 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.475544 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.577759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.577815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.577832 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.577855 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.577874 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.682489 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.682534 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.682548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.682568 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.682578 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.785124 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.785170 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.785182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.785198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.785211 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.888831 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.888918 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.888945 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.888976 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.889001 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.993008 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.993120 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.993140 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.993168 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:19 crc kubenswrapper[4799]: I0127 07:47:19.993191 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:19Z","lastTransitionTime":"2026-01-27T07:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.096647 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.096740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.096768 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.096805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.096833 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.200395 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.200495 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.200522 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.200556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.200581 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.303155 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.303257 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.303290 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.303354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.303378 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.406498 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.406562 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.406574 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.406592 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.406604 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.451411 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:20 crc kubenswrapper[4799]: E0127 07:47:20.451673 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.451834 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:20 crc kubenswrapper[4799]: E0127 07:47:20.452018 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.460138 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:20:50.583879723 +0000 UTC Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.510717 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.510807 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.510826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.510857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.510878 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.614097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.614261 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.614391 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.614424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.614446 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.718454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.718530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.718552 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.718584 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.718604 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.822665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.822728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.822742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.822769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.822786 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.925609 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.925684 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.925707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.925737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:20 crc kubenswrapper[4799]: I0127 07:47:20.925761 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:20Z","lastTransitionTime":"2026-01-27T07:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.029398 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.029454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.029466 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.029488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.029503 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.132995 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.133075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.133087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.133106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.133120 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.237042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.237125 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.237163 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.237203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.237230 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.340988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.341067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.341095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.341128 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.341147 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.444737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.444813 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.444834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.444861 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.444881 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.451338 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.451556 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:21 crc kubenswrapper[4799]: E0127 07:47:21.452005 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:21 crc kubenswrapper[4799]: E0127 07:47:21.452192 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.461447 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:16:53.647978658 +0000 UTC Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.548844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.548918 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.548939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.548969 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.548991 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.651888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.651970 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.651991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.652023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.652045 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.754871 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.754952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.754975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.755001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.755021 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.858432 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.858472 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.858484 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.858502 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.858517 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.961816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.961873 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.961884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.961902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:21 crc kubenswrapper[4799]: I0127 07:47:21.961914 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:21Z","lastTransitionTime":"2026-01-27T07:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.066211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.066295 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.066362 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.066406 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.066441 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.170125 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.170163 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.170173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.170193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.170203 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.273556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.273594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.273603 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.273620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.273631 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.377189 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.377249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.377268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.377288 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.377324 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.451282 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.451489 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:22 crc kubenswrapper[4799]: E0127 07:47:22.451760 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:22 crc kubenswrapper[4799]: E0127 07:47:22.451917 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.462291 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:03:06.064099229 +0000 UTC Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.480815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.480891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.480916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.480948 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.480977 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.584473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.584538 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.584550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.584573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.584585 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.687550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.687672 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.687704 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.687737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.687759 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.791523 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.791602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.791621 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.791648 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.791665 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.894407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.894484 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.894497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.894534 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.894551 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.997821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.997888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.997903 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.997925 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:22 crc kubenswrapper[4799]: I0127 07:47:22.997944 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:22Z","lastTransitionTime":"2026-01-27T07:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.099969 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.100018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.100028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.100045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.100056 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.203273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.203396 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.203419 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.203451 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.203472 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.307081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.307138 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.307151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.307173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.307190 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.409682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.409783 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.409880 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.409918 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.409939 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.412376 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.412482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.412537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.412569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.412623 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.434794 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.440479 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.440531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.440560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.440583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.440597 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.450565 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.450592 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.451116 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.451280 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.461577 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.462437 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:08:38.847773529 +0000 UTC Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.467136 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.467204 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.467227 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.467258 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.467280 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.485209 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.490870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.490963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.490989 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.491022 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.491046 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.514107 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.520685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.520753 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.520773 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.520801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.520822 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.542636 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T07:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"908ca879-28d5-4e99-9761-e4bdaff0505d\\\",\\\"systemUUID\\\":\\\"d3817001-797e-409c-8ccf-0b6489f48d4e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T07:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 07:47:23 crc kubenswrapper[4799]: E0127 07:47:23.542889 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.545652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.545817 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.545950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.546087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.546225 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.649563 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.649626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.649643 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.649676 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.649702 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.753738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.754024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.754054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.754087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.754106 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.857250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.857321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.857336 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.857354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.857369 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.959917 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.959951 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.959960 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.959975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:23 crc kubenswrapper[4799]: I0127 07:47:23.959985 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:23Z","lastTransitionTime":"2026-01-27T07:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.063098 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.063181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.063193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.063212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.063246 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.167578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.167659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.167680 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.167715 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.167742 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.271747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.271808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.271826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.271852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.271872 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.374798 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.375058 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.375135 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.375211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.375276 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.451098 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.451196 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:24 crc kubenswrapper[4799]: E0127 07:47:24.451611 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:24 crc kubenswrapper[4799]: E0127 07:47:24.451695 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.463506 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:07:23.616048448 +0000 UTC Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.477491 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.477530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.477541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.477559 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.477569 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.500727 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.500691517 podStartE2EDuration="1m30.500691517s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.481926921 +0000 UTC m=+110.793030986" watchObservedRunningTime="2026-01-27 07:47:24.500691517 +0000 UTC m=+110.811795582" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.501239 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.501233233 podStartE2EDuration="1m3.501233233s" podCreationTimestamp="2026-01-27 07:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.500666317 +0000 UTC m=+110.811770382" watchObservedRunningTime="2026-01-27 07:47:24.501233233 +0000 UTC m=+110.812337298" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.562216 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-w5s6n" podStartSLOduration=90.562196853 podStartE2EDuration="1m30.562196853s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.543987202 +0000 UTC m=+110.855091287" watchObservedRunningTime="2026-01-27 07:47:24.562196853 +0000 UTC m=+110.873300918" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.562713 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzbd" 
podStartSLOduration=89.562707757 podStartE2EDuration="1m29.562707757s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.561528585 +0000 UTC m=+110.872632650" watchObservedRunningTime="2026-01-27 07:47:24.562707757 +0000 UTC m=+110.873811822" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.580378 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.580647 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.580757 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.580863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.580962 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.597123 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=26.597109663 podStartE2EDuration="26.597109663s" podCreationTimestamp="2026-01-27 07:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.596751843 +0000 UTC m=+110.907855968" watchObservedRunningTime="2026-01-27 07:47:24.597109663 +0000 UTC m=+110.908213728" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.625085 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gc4vh" podStartSLOduration=90.625041427 podStartE2EDuration="1m30.625041427s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.624368518 +0000 UTC m=+110.935472603" watchObservedRunningTime="2026-01-27 07:47:24.625041427 +0000 UTC m=+110.936145532" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.639717 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podStartSLOduration=90.639691557 podStartE2EDuration="1m30.639691557s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.63906783 +0000 UTC m=+110.950171895" watchObservedRunningTime="2026-01-27 07:47:24.639691557 +0000 UTC m=+110.950795642" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.683788 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc 
kubenswrapper[4799]: I0127 07:47:24.683824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.683834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.683851 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.683861 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.746646 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.746606768 podStartE2EDuration="1m28.746606768s" podCreationTimestamp="2026-01-27 07:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.725981489 +0000 UTC m=+111.037085574" watchObservedRunningTime="2026-01-27 07:47:24.746606768 +0000 UTC m=+111.057710883" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.778756 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tgr7w" podStartSLOduration=90.778694718 podStartE2EDuration="1m30.778694718s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.777222867 
+0000 UTC m=+111.088326982" watchObservedRunningTime="2026-01-27 07:47:24.778694718 +0000 UTC m=+111.089798813" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.785955 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.786028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.786051 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.786080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.786100 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.797142 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8fm6z" podStartSLOduration=90.797112185 podStartE2EDuration="1m30.797112185s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.796594001 +0000 UTC m=+111.107698096" watchObservedRunningTime="2026-01-27 07:47:24.797112185 +0000 UTC m=+111.108216270" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.812898 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=90.812876067 podStartE2EDuration="1m30.812876067s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:24.812568458 +0000 UTC m=+111.123672523" watchObservedRunningTime="2026-01-27 07:47:24.812876067 +0000 UTC m=+111.123980132" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.889712 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.890017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.890467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.890554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.890618 4799 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.997160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.997207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.997222 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.997238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:24 crc kubenswrapper[4799]: I0127 07:47:24.997248 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:24Z","lastTransitionTime":"2026-01-27T07:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.098969 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.098998 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.099007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.099020 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.099029 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.201656 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.201746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.201790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.201829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.201852 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.304480 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.304884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.304985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.305089 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.305187 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.407626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.407698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.407717 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.407746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.407768 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.451071 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:25 crc kubenswrapper[4799]: E0127 07:47:25.451212 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.451457 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:25 crc kubenswrapper[4799]: E0127 07:47:25.451689 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.464507 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:33:48.112313542 +0000 UTC Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.511863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.511936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.511955 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.512016 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.512037 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.615216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.615262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.615274 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.615291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.615319 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.719368 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.719460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.719481 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.719513 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.719535 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.823121 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.823192 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.823205 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.823229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.823245 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.926538 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.926595 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.926610 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.926635 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:25 crc kubenswrapper[4799]: I0127 07:47:25.926653 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:25Z","lastTransitionTime":"2026-01-27T07:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.029631 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.029690 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.029701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.029722 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.029734 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.132993 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.133060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.133078 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.133105 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.133124 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.235826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.236103 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.236132 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.236165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.236189 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.339460 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.339545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.339569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.339603 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.339632 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.442356 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.442441 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.442461 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.442492 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.442512 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.450555 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.450568 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:26 crc kubenswrapper[4799]: E0127 07:47:26.451154 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:26 crc kubenswrapper[4799]: E0127 07:47:26.451264 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.451689 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.465207 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:50:00.411922429 +0000 UTC Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.545550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.545601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.545637 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.545655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.545670 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.649887 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.649941 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.649957 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.649984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.650002 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.752634 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.752692 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.752705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.752727 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.752741 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.855851 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.855926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.855936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.855958 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.856008 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.959844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.959909 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.959930 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.959960 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:26 crc kubenswrapper[4799]: I0127 07:47:26.959980 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:26Z","lastTransitionTime":"2026-01-27T07:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.063997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.064048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.064060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.064081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.064095 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.120820 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/3.log" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.125716 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerStarted","Data":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.126316 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.161039 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podStartSLOduration=92.16100754 podStartE2EDuration="1m32.16100754s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:27.160049314 +0000 UTC m=+113.471153389" watchObservedRunningTime="2026-01-27 07:47:27.16100754 +0000 UTC m=+113.472111655" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.166382 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.166431 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.166440 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.166458 4799 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.166469 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.269732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.269826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.269850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.269883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.269904 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.372497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.372553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.372566 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.372587 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.372603 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.450835 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.451061 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:27 crc kubenswrapper[4799]: E0127 07:47:27.451179 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:27 crc kubenswrapper[4799]: E0127 07:47:27.451442 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.465352 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:34:46.276768649 +0000 UTC Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.475426 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.475470 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.475533 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.475552 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.475595 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.491072 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qq7cx"] Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.577917 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.577959 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.577967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.577981 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.577991 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.680134 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.680184 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.680198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.680217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.680231 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.782725 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.782761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.782770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.782785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.782795 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.885067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.885107 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.885117 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.885131 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.885141 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.987891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.987933 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.987943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.987958 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:27 crc kubenswrapper[4799]: I0127 07:47:27.987967 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:27Z","lastTransitionTime":"2026-01-27T07:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.090698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.090747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.090758 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.090778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.090797 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.128077 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:28 crc kubenswrapper[4799]: E0127 07:47:28.128219 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.193216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.193266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.193281 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.193321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.193334 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.296195 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.296235 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.296246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.296260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.296271 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.398570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.398610 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.398620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.398635 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.398644 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.451513 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.451580 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:28 crc kubenswrapper[4799]: E0127 07:47:28.451764 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 07:47:28 crc kubenswrapper[4799]: E0127 07:47:28.452060 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.466242 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:42:57.47352159 +0000 UTC Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.501142 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.501185 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.501201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.501222 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.501234 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.604625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.604698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.604717 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.604745 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.604765 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.707170 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.707249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.707264 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.707286 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.707335 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.810402 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.810497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.810518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.810548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.810567 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.913519 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.913565 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.913575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.913591 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:28 crc kubenswrapper[4799]: I0127 07:47:28.913604 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:28Z","lastTransitionTime":"2026-01-27T07:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.015953 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.016010 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.016024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.016045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.016058 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:29Z","lastTransitionTime":"2026-01-27T07:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.118637 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.118729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.118749 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.118782 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.118804 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:29Z","lastTransitionTime":"2026-01-27T07:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.226073 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.226131 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.226150 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.226769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.227456 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:29Z","lastTransitionTime":"2026-01-27T07:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.330092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.330141 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.330152 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.330169 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.330181 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:29Z","lastTransitionTime":"2026-01-27T07:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.433583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.433654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.433673 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.433702 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.433723 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:29Z","lastTransitionTime":"2026-01-27T07:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.451192 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.451193 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.451375 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qq7cx" podUID="0af5040b-0391-423c-b87d-90df4965f58f" Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.451528 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.467373 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:51:20.137546709 +0000 UTC Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.537817 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.537966 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.537979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.537998 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.538013 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:29Z","lastTransitionTime":"2026-01-27T07:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.640604 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.640665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.640686 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.640712 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.640731 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T07:47:29Z","lastTransitionTime":"2026-01-27T07:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.746171 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.746223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.746238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.746260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.746493 4799 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.842545 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.843106 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.844321 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.844914 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.845491 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6268k"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.846059 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.847760 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.848336 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.849050 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bl4wn"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.849749 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.851017 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.851441 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9t8n9"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.851863 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.852161 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.852251 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.858842 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lzvh6"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.859512 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fv5p6"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.859831 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.859885 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.860436 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.860437 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.874553 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.876096 4799 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.876145 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.876403 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.876583 4799 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: secrets "default-dockercfg-gxtc4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.876608 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"default-dockercfg-gxtc4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.876652 4799 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.876669 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.876759 4799 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.876777 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.876818 4799 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.876832 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.876874 4799 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.876891 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc 
kubenswrapper[4799]: W0127 07:47:29.876939 4799 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.876953 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.876993 4799 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877010 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877061 4799 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace 
"openshift-controller-manager": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877074 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877109 4799 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877121 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877235 4799 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877254 4799 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877291 4799 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877325 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877387 4799 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877405 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: 
User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877442 4799 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877454 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877491 4799 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877561 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 
07:47:29.877672 4799 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: secrets "cluster-samples-operator-dockercfg-xpp9w" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.877696 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877727 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-samples-operator-dockercfg-xpp9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.877744 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.877769 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877799 4799 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: secrets "console-operator-dockercfg-4xjcr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 
07:47:29.877833 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.877829 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-operator-dockercfg-4xjcr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.877905 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.877775 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.877999 4799 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: configmaps "service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.878016 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.878023 4799 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": 
failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.878020 4799 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: secrets "cluster-version-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.878031 4799 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.878048 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.878065 4799 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.881725 4799 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.881841 4799 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.881871 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.878062 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-version-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.881937 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": 
Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.882000 4799 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: secrets "console-dockercfg-f62pw" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.882026 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-dockercfg-f62pw\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.882051 4799 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: configmaps "console-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.882082 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 
27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.882106 4799 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.882117 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.882209 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.883052 4799 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.883088 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found 
between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.883192 4799 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.883255 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.883486 4799 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.883518 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.883571 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tnr7q"] Jan 27 07:47:29 crc 
kubenswrapper[4799]: I0127 07:47:29.883598 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.883616 4799 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.883678 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.883955 4799 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.883978 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884004 4799 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: configmaps "oauth-serving-cert" is 
forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884021 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"oauth-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884039 4799 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884068 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.884102 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884211 4799 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: configmaps "console-operator-config" is forbidden: User "system:node:crc" cannot list 
resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884230 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884339 4799 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884366 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884428 4799 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 
07:47:29.884447 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884466 4799 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884483 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.884592 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884622 4799 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884645 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884802 4799 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884821 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884837 4799 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884859 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.884890 4799 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.884903 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.884922 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.885078 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tnr7q" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.885198 4799 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.885219 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.885230 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g9lhq"] Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.885575 4799 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: secrets "console-oauth-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.885617 4799 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.885639 4799 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.885621 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-oauth-config\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.885819 4799 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.885839 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.885912 4799 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource 
"configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.885929 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: W0127 07:47:29.886164 4799 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: secrets "console-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 07:47:29 crc kubenswrapper[4799]: E0127 07:47:29.886186 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.886385 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.891508 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.898188 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-n67f6"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.898394 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.902280 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.902947 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.903631 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.904681 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ww5r"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.904700 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.905054 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dc6gt"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.905671 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.905888 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.906017 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.906117 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.906351 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907466 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907486 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8169c83-7958-41b3-84b9-94fd314f09e8-auth-proxy-config\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907506 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt5gc\" (UniqueName: 
\"kubernetes.io/projected/a7d76fd7-71e0-4263-ba99-b08222f58e6f-kube-api-access-lt5gc\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907524 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8169c83-7958-41b3-84b9-94fd314f09e8-config\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907541 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdss4\" (UniqueName: \"kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907558 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3cad1fa-7215-4807-8c41-cc85a25dcb32-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907575 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907591 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7d76fd7-71e0-4263-ba99-b08222f58e6f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907903 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a8169c83-7958-41b3-84b9-94fd314f09e8-machine-approver-tls\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907945 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.907971 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit-dir\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908039 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pnxvl\" (UniqueName: \"kubernetes.io/projected/de0b6ae8-2347-46f4-9870-9e9b14d6a621-kube-api-access-pnxvl\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908061 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ln5\" (UniqueName: \"kubernetes.io/projected/a8169c83-7958-41b3-84b9-94fd314f09e8-kube-api-access-x8ln5\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908112 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908169 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0b6ae8-2347-46f4-9870-9e9b14d6a621-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908189 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") 
" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908260 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7d76fd7-71e0-4263-ba99-b08222f58e6f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908330 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0b6ae8-2347-46f4-9870-9e9b14d6a621-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908346 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908367 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3cad1fa-7215-4807-8c41-cc85a25dcb32-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908453 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908487 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908507 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgw95\" (UniqueName: \"kubernetes.io/projected/ebd2f02f-3d33-46f5-b78f-c3a81e326627-kube-api-access-bgw95\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc 
kubenswrapper[4799]: I0127 07:47:29.908524 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908541 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908665 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908721 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908799 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/ebd2f02f-3d33-46f5-b78f-c3a81e326627-node-pullsecrets\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908933 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3cad1fa-7215-4807-8c41-cc85a25dcb32-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.908997 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.919965 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-serving-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.920005 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.922798 4799 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.923019 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.923014 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.922906 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924278 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924322 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8kx4\" (UniqueName: \"kubernetes.io/projected/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-kube-api-access-p8kx4\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924362 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a7d76fd7-71e0-4263-ba99-b08222f58e6f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924414 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924552 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924630 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924662 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.924760 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.925217 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.932251 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.932611 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.932824 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.933253 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.934488 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.934571 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.934705 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.934801 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.934992 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935083 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935210 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935327 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935460 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935503 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935775 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935835 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935886 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.935912 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936002 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936083 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936100 4799 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936133 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936225 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936370 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936412 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936436 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936372 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936557 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.936570 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.940993 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.941829 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"] Jan 27 07:47:29 crc 
kubenswrapper[4799]: I0127 07:47:29.941915 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.942940 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.944020 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-64xcf"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.944205 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.944590 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-l4462"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.945025 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.945256 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.947642 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.948129 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.948780 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mnv4z"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.949179 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.949288 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.962913 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.965587 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.968199 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.969204 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.971775 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.973522 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.973595 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.973722 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.974733 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.975869 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2m2xz"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.977410 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.978650 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.985844 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.991581 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.993535 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.994391 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dxknv"] Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.994979 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:29 crc kubenswrapper[4799]: I0127 07:47:29.995020 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.000773 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.000801 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.001377 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d8mn9"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.001658 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.001740 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.001765 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.001697 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.002601 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xjwwr"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.003086 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.003563 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.004140 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.004727 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.004784 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.005366 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.006105 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.007100 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.008281 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.009434 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.010476 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.011084 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.011948 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9t8n9"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.013354 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fv5p6"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.014867 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.016573 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bl4wn"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.017705 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g9lhq"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.018798 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lzvh6"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.020486 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.021875 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.023207 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n67f6"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.024686 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq"] 
Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025142 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025170 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit-dir\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025196 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxvl\" (UniqueName: \"kubernetes.io/projected/de0b6ae8-2347-46f4-9870-9e9b14d6a621-kube-api-access-pnxvl\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025217 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ln5\" (UniqueName: \"kubernetes.io/projected/a8169c83-7958-41b3-84b9-94fd314f09e8-kube-api-access-x8ln5\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025236 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: 
\"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit-dir\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025440 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0b6ae8-2347-46f4-9870-9e9b14d6a621-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025468 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025486 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a7d76fd7-71e0-4263-ba99-b08222f58e6f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025440 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025601 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025698 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0b6ae8-2347-46f4-9870-9e9b14d6a621-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025728 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxwc\" (UniqueName: \"kubernetes.io/projected/58e91935-0e33-4595-8b6a-27f157d8adaf-kube-api-access-glxwc\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.025767 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026142 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3cad1fa-7215-4807-8c41-cc85a25dcb32-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e91935-0e33-4595-8b6a-27f157d8adaf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026351 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026372 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026385 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3cad1fa-7215-4807-8c41-cc85a25dcb32-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026395 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgw95\" (UniqueName: \"kubernetes.io/projected/ebd2f02f-3d33-46f5-b78f-c3a81e326627-kube-api-access-bgw95\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026502 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026527 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026551 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:30 
crc kubenswrapper[4799]: I0127 07:47:30.026572 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026603 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gnr\" (UniqueName: \"kubernetes.io/projected/a593dc31-38ff-4849-9ad0-cbaf0b6d1547-kube-api-access-t5gnr\") pod \"downloads-7954f5f757-tnr7q\" (UID: \"a593dc31-38ff-4849-9ad0-cbaf0b6d1547\") " pod="openshift-console/downloads-7954f5f757-tnr7q" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026594 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0b6ae8-2347-46f4-9870-9e9b14d6a621-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026625 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5dc5c15b-696b-49fe-9593-102cc1e00398-images\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026743 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebd2f02f-3d33-46f5-b78f-c3a81e326627-node-pullsecrets\") pod \"apiserver-76f77b778f-9t8n9\" (UID: 
\"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026777 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3cad1fa-7215-4807-8c41-cc85a25dcb32-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026795 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58e91935-0e33-4595-8b6a-27f157d8adaf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026827 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026843 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-serving-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026859 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebd2f02f-3d33-46f5-b78f-c3a81e326627-node-pullsecrets\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026876 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026929 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026953 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8kx4\" (UniqueName: \"kubernetes.io/projected/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-kube-api-access-p8kx4\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026960 4799 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3cad1fa-7215-4807-8c41-cc85a25dcb32-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.026985 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7t6x\" (UniqueName: \"kubernetes.io/projected/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-kube-api-access-r7t6x\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027030 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7d76fd7-71e0-4263-ba99-b08222f58e6f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027072 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027093 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4fq\" (UniqueName: \"kubernetes.io/projected/5dc5c15b-696b-49fe-9593-102cc1e00398-kube-api-access-vb4fq\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027132 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027189 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8169c83-7958-41b3-84b9-94fd314f09e8-auth-proxy-config\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027245 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8169c83-7958-41b3-84b9-94fd314f09e8-config\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027364 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc5c15b-696b-49fe-9593-102cc1e00398-config\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027451 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt5gc\" (UniqueName: \"kubernetes.io/projected/a7d76fd7-71e0-4263-ba99-b08222f58e6f-kube-api-access-lt5gc\") pod 
\"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027474 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdss4\" (UniqueName: \"kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3cad1fa-7215-4807-8c41-cc85a25dcb32-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027538 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027571 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dc5c15b-696b-49fe-9593-102cc1e00398-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027635 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7d76fd7-71e0-4263-ba99-b08222f58e6f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027673 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.027700 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a8169c83-7958-41b3-84b9-94fd314f09e8-machine-approver-tls\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.028773 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8169c83-7958-41b3-84b9-94fd314f09e8-config\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.029560 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7d76fd7-71e0-4263-ba99-b08222f58e6f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: 
\"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.033637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8169c83-7958-41b3-84b9-94fd314f09e8-auth-proxy-config\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.033773 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tnr7q"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.033851 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.034819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0b6ae8-2347-46f4-9870-9e9b14d6a621-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.035322 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7d76fd7-71e0-4263-ba99-b08222f58e6f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.035670 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/a8169c83-7958-41b3-84b9-94fd314f09e8-machine-approver-tls\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.037321 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dc6gt"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.039359 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2m2xz"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.041587 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.045622 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.045648 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d8mn9"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.045663 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-64xcf"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.045846 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ww5r"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.049970 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xjwwr"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.051504 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.061135 4799 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dxknv"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.062564 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-clk6d"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.064208 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.066819 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8g24k"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.070366 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.073066 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.073115 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mnv4z"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.075372 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.076795 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.078620 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.080619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.081941 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.084051 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.084415 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.085045 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.086329 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8g24k"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.087396 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.088501 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.089558 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.090639 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.091865 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-qvlrt"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.092513 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qvlrt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.093052 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qvlrt"] Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.105077 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.124761 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128345 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58e91935-0e33-4595-8b6a-27f157d8adaf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128443 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7t6x\" (UniqueName: \"kubernetes.io/projected/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-kube-api-access-r7t6x\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128567 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4fq\" (UniqueName: 
\"kubernetes.io/projected/5dc5c15b-696b-49fe-9593-102cc1e00398-kube-api-access-vb4fq\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128614 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc5c15b-696b-49fe-9593-102cc1e00398-config\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128665 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dc5c15b-696b-49fe-9593-102cc1e00398-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128700 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128794 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glxwc\" (UniqueName: \"kubernetes.io/projected/58e91935-0e33-4595-8b6a-27f157d8adaf-kube-api-access-glxwc\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128838 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e91935-0e33-4595-8b6a-27f157d8adaf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128912 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gnr\" (UniqueName: \"kubernetes.io/projected/a593dc31-38ff-4849-9ad0-cbaf0b6d1547-kube-api-access-t5gnr\") pod \"downloads-7954f5f757-tnr7q\" (UID: \"a593dc31-38ff-4849-9ad0-cbaf0b6d1547\") " pod="openshift-console/downloads-7954f5f757-tnr7q" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.128932 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5dc5c15b-696b-49fe-9593-102cc1e00398-images\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.130247 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5dc5c15b-696b-49fe-9593-102cc1e00398-images\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.129405 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc5c15b-696b-49fe-9593-102cc1e00398-config\") pod 
\"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.133090 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dc5c15b-696b-49fe-9593-102cc1e00398-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.145209 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.177671 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.186524 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.191764 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58e91935-0e33-4595-8b6a-27f157d8adaf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.205459 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.209726 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e91935-0e33-4595-8b6a-27f157d8adaf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.225897 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.244943 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.265068 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.285898 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.305170 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.325722 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.344947 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.365478 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.384947 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.405225 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.425965 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.445410 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.450338 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.450338 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.466475 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.467912 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:38:06.218430019 +0000 UTC Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.467965 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.485329 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.505166 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 
07:47:30.526038 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.546326 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.565604 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.585357 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.605284 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.625392 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.645094 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.665619 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.684959 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.705515 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.724658 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: 
I0127 07:47:30.765078 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.785501 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.806033 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.824805 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.845318 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.865933 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.885070 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.905282 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.924554 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.945521 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 
07:47:30.965217 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.984485 4799 request.go:700] Waited for 1.008568047s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0 Jan 27 07:47:30 crc kubenswrapper[4799]: I0127 07:47:30.994840 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.006063 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.025566 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025660 4799 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025712 4799 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025668 4799 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025758 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-trusted-ca-bundle podName:f6e1f4db-f2d9-4334-99ea-57ec0b6711e2 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:47:31.525735027 +0000 UTC m=+117.836839092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-trusted-ca-bundle") pod "authentication-operator-69f744f599-9gr7w" (UID: "f6e1f4db-f2d9-4334-99ea-57ec0b6711e2") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025852 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.5258287 +0000 UTC m=+117.836932775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025879 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.525869351 +0000 UTC m=+117.836973426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025881 4799 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.025952 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-config podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.525914482 +0000 UTC m=+117.837018757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-config") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026148 4799 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026231 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.52620817 +0000 UTC m=+117.837312436 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026459 4799 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026560 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.52655069 +0000 UTC m=+117.837654755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026629 4799 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026644 4799 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026688 4799 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026708 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.526683943 +0000 UTC m=+117.837788208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026747 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-trusted-ca-bundle podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.526728165 +0000 UTC m=+117.837832470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-trusted-ca-bundle") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026750 4799 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026779 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert podName:f6e1f4db-f2d9-4334-99ea-57ec0b6711e2 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.526762686 +0000 UTC m=+117.837867001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert") pod "authentication-operator-69f744f599-9gr7w" (UID: "f6e1f4db-f2d9-4334-99ea-57ec0b6711e2") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026781 4799 secret.go:188] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026811 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca podName:a3cad1fa-7215-4807-8c41-cc85a25dcb32 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.526798917 +0000 UTC m=+117.837903212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca") pod "cluster-version-operator-5c965bbfc6-69np8" (UID: "a3cad1fa-7215-4807-8c41-cc85a25dcb32") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026463 4799 secret.go:188] Couldn't get secret openshift-console/console-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026835 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert podName:a3cad1fa-7215-4807-8c41-cc85a25dcb32 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.526822387 +0000 UTC m=+117.837926462 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert") pod "cluster-version-operator-5c965bbfc6-69np8" (UID: "a3cad1fa-7215-4807-8c41-cc85a25dcb32") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026914 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.526895379 +0000 UTC m=+117.837999674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.026978 4799 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027029 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.527013312 +0000 UTC m=+117.838117597 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027115 4799 secret.go:188] Couldn't get secret openshift-console/console-oauth-config: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027185 4799 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027191 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.527183287 +0000 UTC m=+117.838287352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027250 4799 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027269 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:47:31.52725169 +0000 UTC m=+117.838355975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027329 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config podName:f6e1f4db-f2d9-4334-99ea-57ec0b6711e2 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.527283121 +0000 UTC m=+117.838387406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config") pod "authentication-operator-69f744f599-9gr7w" (UID: "f6e1f4db-f2d9-4334-99ea-57ec0b6711e2") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027403 4799 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027466 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-serving-ca podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.527448715 +0000 UTC m=+117.838553020 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-serving-ca") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027531 4799 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027631 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.52762105 +0000 UTC m=+117.838725115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027887 4799 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027909 4799 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.027970 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.527926498 +0000 UTC m=+117.839030553 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.028006 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle podName:f6e1f4db-f2d9-4334-99ea-57ec0b6711e2 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.52797913 +0000 UTC m=+117.839083355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle") pod "authentication-operator-69f744f599-9gr7w" (UID: "f6e1f4db-f2d9-4334-99ea-57ec0b6711e2") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.044706 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.065832 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.084694 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.111995 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.125722 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 
07:47:31.129683 4799 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: E0127 07:47:31.129824 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls podName:632a04e6-2ac7-4d81-a22c-2e3d4b58afe4 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:31.629791997 +0000 UTC m=+117.940896102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-7gnsz" (UID: "632a04e6-2ac7-4d81-a22c-2e3d4b58afe4") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.145456 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.165153 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.185285 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.205391 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.225945 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.245349 4799 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.266205 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.286374 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.305898 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.324986 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.344256 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.365787 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.384877 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.405501 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.425568 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.446044 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 
27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.451014 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.451191 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.465350 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.485381 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.506409 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.525474 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.545381 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549447 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549514 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549545 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549580 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549606 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549631 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549713 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549745 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549769 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-serving-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549793 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549815 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549907 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.549962 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.550006 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.550046 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.550075 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.550099 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.550136 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.566251 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.585296 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.605527 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.626048 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.644968 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.650848 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.665851 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.685092 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.705118 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.725062 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.745475 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.765813 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.807997 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxvl\" (UniqueName: \"kubernetes.io/projected/de0b6ae8-2347-46f4-9870-9e9b14d6a621-kube-api-access-pnxvl\") pod \"openshift-apiserver-operator-796bbdcf4f-nqdj2\" (UID: \"de0b6ae8-2347-46f4-9870-9e9b14d6a621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.818642 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ln5\" (UniqueName: \"kubernetes.io/projected/a8169c83-7958-41b3-84b9-94fd314f09e8-kube-api-access-x8ln5\") pod \"machine-approver-56656f9798-6268k\" (UID: \"a8169c83-7958-41b3-84b9-94fd314f09e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.918839 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a7d76fd7-71e0-4263-ba99-b08222f58e6f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.942983 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt5gc\" (UniqueName: \"kubernetes.io/projected/a7d76fd7-71e0-4263-ba99-b08222f58e6f-kube-api-access-lt5gc\") pod \"cluster-image-registry-operator-dc59b4c8b-wppxn\" (UID: \"a7d76fd7-71e0-4263-ba99-b08222f58e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.944857 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.962256 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.965616 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.985814 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 07:47:31 crc kubenswrapper[4799]: I0127 07:47:31.999084 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.004333 4799 request.go:700] Waited for 1.936491524s due to client-side throttling, not priority and fairness, request: PATCH:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-879f6c89f-fv5p6/status Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.019410 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.025454 4799 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 07:47:32 crc kubenswrapper[4799]: W0127 07:47:32.036585 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8169c83_7958_41b3_84b9_94fd314f09e8.slice/crio-819a598b09c298913bbde2548ae4dc824a74f9c2f3c622730cef1bbe8b3a0fa2 WatchSource:0}: Error finding container 819a598b09c298913bbde2548ae4dc824a74f9c2f3c622730cef1bbe8b3a0fa2: Status 404 returned error can't find the container with id 819a598b09c298913bbde2548ae4dc824a74f9c2f3c622730cef1bbe8b3a0fa2
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.045853 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.065577 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.085953 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.106616 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.125605 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.143791 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" event={"ID":"a8169c83-7958-41b3-84b9-94fd314f09e8","Type":"ContainerStarted","Data":"819a598b09c298913bbde2548ae4dc824a74f9c2f3c622730cef1bbe8b3a0fa2"}
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.145577 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.177085 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2"]
Jan 27 07:47:32 crc kubenswrapper[4799]: W0127 07:47:32.186791 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde0b6ae8_2347_46f4_9870_9e9b14d6a621.slice/crio-35cba62e6ebb95bac09fcae84503c14d905b24619ef178cc9a8fd2f59d360e17 WatchSource:0}: Error finding container 35cba62e6ebb95bac09fcae84503c14d905b24619ef178cc9a8fd2f59d360e17: Status 404 returned error can't find the container with id 35cba62e6ebb95bac09fcae84503c14d905b24619ef178cc9a8fd2f59d360e17
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.204764 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn"]
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.207359 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4fq\" (UniqueName: \"kubernetes.io/projected/5dc5c15b-696b-49fe-9593-102cc1e00398-kube-api-access-vb4fq\") pod \"machine-api-operator-5694c8668f-g9lhq\" (UID: \"5dc5c15b-696b-49fe-9593-102cc1e00398\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq"
Jan 27 07:47:32 crc kubenswrapper[4799]: W0127 07:47:32.214003 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7d76fd7_71e0_4263_ba99_b08222f58e6f.slice/crio-41bdb3e3efce67ef590a216fa5c025425503c998c0f39935b274c6dd67731a2f WatchSource:0}: Error finding container 41bdb3e3efce67ef590a216fa5c025425503c998c0f39935b274c6dd67731a2f: Status 404 returned error can't find the container with id 41bdb3e3efce67ef590a216fa5c025425503c998c0f39935b274c6dd67731a2f
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.220862 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glxwc\" (UniqueName: \"kubernetes.io/projected/58e91935-0e33-4595-8b6a-27f157d8adaf-kube-api-access-glxwc\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkdlq\" (UID: \"58e91935-0e33-4595-8b6a-27f157d8adaf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.222171 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.245607 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.265829 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.305896 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.322392 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.328156 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.334011 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.363627 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-config\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.363755 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9664c11c-1653-4690-9eb4-9c4918070a0d-service-ca-bundle\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.363778 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-policies\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364508 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364594 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364667 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5735c0d4-af84-4c65-b453-88a9086e0d8c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364708 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-client-ca\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364736 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gprsr\" (UniqueName: \"kubernetes.io/projected/5e84a898-a553-48bd-afbb-5688db92ff4b-kube-api-access-gprsr\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364762 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364799 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364877 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsxwm\" (UniqueName: \"kubernetes.io/projected/9664c11c-1653-4690-9eb4-9c4918070a0d-kube-api-access-qsxwm\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e84a898-a553-48bd-afbb-5688db92ff4b-serving-cert\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364956 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.364985 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.365008 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:32.864993584 +0000 UTC m=+119.176097889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365033 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365288 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365352 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-certificates\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365400 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-serving-cert\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365429 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-audit-policies\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365457 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-encryption-config\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365479 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365502 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5735c0d4-af84-4c65-b453-88a9086e0d8c-config\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365550 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fac797c-c9f7-45e7-91dd-1efa96411e06-serving-cert\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365598 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fea687ed-75f7-463e-9c99-c53398e244b5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365635 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-metrics-certs\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365663 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365704 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-etcd-client\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365728 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwvh4\" (UniqueName: \"kubernetes.io/projected/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-kube-api-access-rwvh4\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365756 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-bound-sa-token\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365780 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-stats-auth\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365825 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97f98\" (UniqueName: \"kubernetes.io/projected/810999fd-fa8e-4e6c-9b07-bc58f174202b-kube-api-access-97f98\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365847 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-config\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365882 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fea687ed-75f7-463e-9c99-c53398e244b5-images\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365907 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365928 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-serving-cert\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365959 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-service-ca\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.365985 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnvq\" (UniqueName: \"kubernetes.io/projected/208eb479-5aaa-44f5-91d4-7a9394a2aac2-kube-api-access-kwnvq\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366043 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tct\" (UniqueName: \"kubernetes.io/projected/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-kube-api-access-c2tct\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366205 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366265 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-trusted-ca\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366292 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208eb479-5aaa-44f5-91d4-7a9394a2aac2-trusted-ca\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366333 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-default-certificate\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366364 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e88970a-7b70-4335-ab92-5b927f6864bd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dc6gt\" (UID: \"7e88970a-7b70-4335-ab92-5b927f6864bd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w68rn\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-kube-api-access-w68rn\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366410 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366441 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzj2d\" (UniqueName: \"kubernetes.io/projected/fea687ed-75f7-463e-9c99-c53398e244b5-kube-api-access-jzj2d\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366480 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5fac797c-c9f7-45e7-91dd-1efa96411e06-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366504 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366651 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fea687ed-75f7-463e-9c99-c53398e244b5-proxy-tls\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8746\" (UniqueName: \"kubernetes.io/projected/6a7388c2-4452-4132-961e-3a2f24154237-kube-api-access-q8746\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366702 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88a08ba8-7cde-40c0-9a88-b01b642c78df-metrics-tls\") pod \"dns-operator-744455d44c-mnv4z\" (UID: \"88a08ba8-7cde-40c0-9a88-b01b642c78df\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366722 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrrgx\" (UniqueName: \"kubernetes.io/projected/88a08ba8-7cde-40c0-9a88-b01b642c78df-kube-api-access-mrrgx\") pod \"dns-operator-744455d44c-mnv4z\" (UID: \"88a08ba8-7cde-40c0-9a88-b01b642c78df\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366756 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvdt\" (UniqueName: \"kubernetes.io/projected/7e88970a-7b70-4335-ab92-5b927f6864bd-kube-api-access-tvvdt\") pod \"multus-admission-controller-857f4d67dd-dc6gt\" (UID: \"7e88970a-7b70-4335-ab92-5b927f6864bd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366790 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208eb479-5aaa-44f5-91d4-7a9394a2aac2-serving-cert\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366824 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5735c0d4-af84-4c65-b453-88a9086e0d8c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366879 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-client-ca\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366927 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-proxy-tls\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366958 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac170c6-2d9b-4966-873b-a92ce0f3da29-serving-cert\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.366987 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-client\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.367005 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208eb479-5aaa-44f5-91d4-7a9394a2aac2-config\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.367035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8z4k\" (UniqueName: \"kubernetes.io/projected/cac170c6-2d9b-4966-873b-a92ce0f3da29-kube-api-access-q8z4k\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.368557 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-tls\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369072 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369091 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-dir\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369135 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a7388c2-4452-4132-961e-3a2f24154237-audit-dir\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369190 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwtsz\" (UniqueName: \"kubernetes.io/projected/5fac797c-c9f7-45e7-91dd-1efa96411e06-kube-api-access-wwtsz\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369369 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-config\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369389 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-ca\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369407 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.369426 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.372331 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.381818 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.385529 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.391550 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.407639 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq"]
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.411389 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.421498 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.433324 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.441729 4799
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.445613 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.457399 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.465333 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471013 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471162 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-audit\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.471179 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:47:32.971153273 +0000 UTC m=+119.282257338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471284 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvgt2\" (UniqueName: \"kubernetes.io/projected/82fa445d-953b-4729-8c80-a2bc760f0ce3-kube-api-access-pvgt2\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471356 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-apiservice-cert\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471389 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-csi-data-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471478 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471509 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4fs\" (UniqueName: \"kubernetes.io/projected/8f54e330-fce1-4959-89f0-76a62f86ae43-kube-api-access-gv4fs\") pod \"migrator-59844c95c7-tbm5t\" (UID: \"8f54e330-fce1-4959-89f0-76a62f86ae43\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471550 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4j9\" (UniqueName: \"kubernetes.io/projected/b4251b28-e3a3-4694-b8d3-8106bacdfe86-kube-api-access-vp4j9\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471583 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-mountpoint-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471608 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf282b3-df77-4087-a390-c000adfd8f86-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-n2c6j\" (UID: \"fcf282b3-df77-4087-a390-c000adfd8f86\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471639 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471668 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471694 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-signing-key\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471726 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hl6v\" (UniqueName: \"kubernetes.io/projected/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-kube-api-access-9hl6v\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471754 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4251b28-e3a3-4694-b8d3-8106bacdfe86-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471779 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471817 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5735c0d4-af84-4c65-b453-88a9086e0d8c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471844 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-client-ca\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gprsr\" (UniqueName: \"kubernetes.io/projected/5e84a898-a553-48bd-afbb-5688db92ff4b-kube-api-access-gprsr\") pod 
\"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471897 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-registration-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471923 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471949 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.471973 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de07a2d4-e916-4c2d-bb3b-b8a268461a71-secret-volume\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472003 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsxwm\" (UniqueName: \"kubernetes.io/projected/9664c11c-1653-4690-9eb4-9c4918070a0d-kube-api-access-qsxwm\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472026 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e84a898-a553-48bd-afbb-5688db92ff4b-serving-cert\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472051 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472075 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472102 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4whn\" (UniqueName: \"kubernetes.io/projected/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-kube-api-access-b4whn\") 
pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472132 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-certificates\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472167 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bc66b736-3c9e-40dd-b203-a9238bf0789d-node-bootstrap-token\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472190 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/efe93705-6f73-4523-9c27-4e2b2486d7ad-profile-collector-cert\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472217 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472241 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9eb96a5-27c6-4cab-889a-1938f92b95aa-srv-cert\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472271 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-encryption-config\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472466 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-metrics-certs\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472495 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djfv8\" (UniqueName: \"kubernetes.io/projected/de07a2d4-e916-4c2d-bb3b-b8a268461a71-kube-api-access-djfv8\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472515 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fea687ed-75f7-463e-9c99-c53398e244b5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472540 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwvh4\" (UniqueName: \"kubernetes.io/projected/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-kube-api-access-rwvh4\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhbl\" (UniqueName: \"kubernetes.io/projected/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-kube-api-access-rzhbl\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472586 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f90c837a-43bf-4353-ba01-70a80be22306-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472608 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-97f98\" (UniqueName: \"kubernetes.io/projected/810999fd-fa8e-4e6c-9b07-bc58f174202b-kube-api-access-97f98\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-config\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472641 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-serving-cert\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472697 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814c6bed-3956-4eff-9909-58d7b74247c5-config\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472715 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-metrics-tls\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: 
I0127 07:47:32.472733 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90c837a-43bf-4353-ba01-70a80be22306-config\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472757 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-signing-cabundle\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472787 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w68rn\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-kube-api-access-w68rn\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472806 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472849 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwc9g\" (UniqueName: \"kubernetes.io/projected/17f2f9b7-aad3-4959-8193-3e3e1d525141-kube-api-access-dwc9g\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-fmzz6\" (UID: \"17f2f9b7-aad3-4959-8193-3e3e1d525141\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzj2d\" (UniqueName: \"kubernetes.io/projected/fea687ed-75f7-463e-9c99-c53398e244b5-kube-api-access-jzj2d\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472897 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5fac797c-c9f7-45e7-91dd-1efa96411e06-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472916 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472933 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fea687ed-75f7-463e-9c99-c53398e244b5-proxy-tls\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:32 crc 
kubenswrapper[4799]: I0127 07:47:32.472954 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvdt\" (UniqueName: \"kubernetes.io/projected/7e88970a-7b70-4335-ab92-5b927f6864bd-kube-api-access-tvvdt\") pod \"multus-admission-controller-857f4d67dd-dc6gt\" (UID: \"7e88970a-7b70-4335-ab92-5b927f6864bd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472972 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88a08ba8-7cde-40c0-9a88-b01b642c78df-metrics-tls\") pod \"dns-operator-744455d44c-mnv4z\" (UID: \"88a08ba8-7cde-40c0-9a88-b01b642c78df\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.472991 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208eb479-5aaa-44f5-91d4-7a9394a2aac2-serving-cert\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473009 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6cf7c2a6-012a-48df-9c00-e6eac17da885-cert\") pod \"ingress-canary-qvlrt\" (UID: \"6cf7c2a6-012a-48df-9c00-e6eac17da885\") " pod="openshift-ingress-canary/ingress-canary-qvlrt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473029 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5735c0d4-af84-4c65-b453-88a9086e0d8c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473057 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-proxy-tls\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473074 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac170c6-2d9b-4966-873b-a92ce0f3da29-serving-cert\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473095 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-client\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208eb479-5aaa-44f5-91d4-7a9394a2aac2-config\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473129 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-client-ca\") pod 
\"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473147 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-plugins-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473170 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-tls\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473190 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8z4k\" (UniqueName: \"kubernetes.io/projected/cac170c6-2d9b-4966-873b-a92ce0f3da29-kube-api-access-q8z4k\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473208 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-dir\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473236 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6a7388c2-4452-4132-961e-3a2f24154237-audit-dir\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473254 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473278 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtsz\" (UniqueName: \"kubernetes.io/projected/5fac797c-c9f7-45e7-91dd-1efa96411e06-kube-api-access-wwtsz\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473327 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ps8z\" (UniqueName: \"kubernetes.io/projected/c9eb96a5-27c6-4cab-889a-1938f92b95aa-kube-api-access-9ps8z\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473350 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-config\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 
07:47:32.473370 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-ca\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473389 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473416 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9664c11c-1653-4690-9eb4-9c4918070a0d-service-ca-bundle\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473441 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-config\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.473463 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-policies\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 
07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.474700 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-config\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.474833 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.474862 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99nck\" (UniqueName: \"kubernetes.io/projected/fcf282b3-df77-4087-a390-c000adfd8f86-kube-api-access-99nck\") pod \"package-server-manager-789f6589d5-n2c6j\" (UID: \"fcf282b3-df77-4087-a390-c000adfd8f86\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.474889 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.474928 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd2gn\" (UniqueName: 
\"kubernetes.io/projected/814c6bed-3956-4eff-9909-58d7b74247c5-kube-api-access-nd2gn\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.474967 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-webhook-cert\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.474986 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/17f2f9b7-aad3-4959-8193-3e3e1d525141-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fmzz6\" (UID: \"17f2f9b7-aad3-4959-8193-3e3e1d525141\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475012 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae711767-328e-4007-94b6-59087a7ca625-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475042 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: 
\"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475058 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475082 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-serving-cert\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475098 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-config-volume\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475123 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-audit-policies\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475139 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5735c0d4-af84-4c65-b453-88a9086e0d8c-config\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fac797c-c9f7-45e7-91dd-1efa96411e06-serving-cert\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475202 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475219 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814c6bed-3956-4eff-9909-58d7b74247c5-serving-cert\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" 
Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475236 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/efe93705-6f73-4523-9c27-4e2b2486d7ad-srv-cert\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475260 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-etcd-client\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475293 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-bound-sa-token\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475345 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-stats-auth\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475366 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5r8m\" (UniqueName: \"kubernetes.io/projected/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-kube-api-access-x5r8m\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: 
\"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475385 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4251b28-e3a3-4694-b8d3-8106bacdfe86-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475403 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m88mj\" (UniqueName: \"kubernetes.io/projected/efe93705-6f73-4523-9c27-4e2b2486d7ad-kube-api-access-m88mj\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475652 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9eb96a5-27c6-4cab-889a-1938f92b95aa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475676 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475694 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f90c837a-43bf-4353-ba01-70a80be22306-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475711 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae711767-328e-4007-94b6-59087a7ca625-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475729 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fea687ed-75f7-463e-9c99-c53398e244b5-images\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475756 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-service-ca\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475788 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnvq\" (UniqueName: \"kubernetes.io/projected/208eb479-5aaa-44f5-91d4-7a9394a2aac2-kube-api-access-kwnvq\") pod \"console-operator-58897d9998-lzvh6\" 
(UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475808 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tct\" (UniqueName: \"kubernetes.io/projected/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-kube-api-access-c2tct\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475827 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bc66b736-3c9e-40dd-b203-a9238bf0789d-certs\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475853 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475870 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxcxf\" (UniqueName: \"kubernetes.io/projected/6cf7c2a6-012a-48df-9c00-e6eac17da885-kube-api-access-gxcxf\") pod \"ingress-canary-qvlrt\" (UID: \"6cf7c2a6-012a-48df-9c00-e6eac17da885\") " pod="openshift-ingress-canary/ingress-canary-qvlrt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475886 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475910 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-trusted-ca\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475929 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208eb479-5aaa-44f5-91d4-7a9394a2aac2-trusted-ca\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475947 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-default-certificate\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475965 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.475981 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-socket-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476012 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e88970a-7b70-4335-ab92-5b927f6864bd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dc6gt\" (UID: \"7e88970a-7b70-4335-ab92-5b927f6864bd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476029 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae711767-328e-4007-94b6-59087a7ca625-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476051 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de07a2d4-e916-4c2d-bb3b-b8a268461a71-config-volume\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476068 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-tmpfs\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: 
\"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476089 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8746\" (UniqueName: \"kubernetes.io/projected/6a7388c2-4452-4132-961e-3a2f24154237-kube-api-access-q8746\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476291 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvhg\" (UniqueName: \"kubernetes.io/projected/bc66b736-3c9e-40dd-b203-a9238bf0789d-kube-api-access-lzvhg\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476328 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrrgx\" (UniqueName: \"kubernetes.io/projected/88a08ba8-7cde-40c0-9a88-b01b642c78df-kube-api-access-mrrgx\") pod \"dns-operator-744455d44c-mnv4z\" (UID: \"88a08ba8-7cde-40c0-9a88-b01b642c78df\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.476346 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9m47\" (UniqueName: \"kubernetes.io/projected/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-kube-api-access-w9m47\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.477130 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/5fac797c-c9f7-45e7-91dd-1efa96411e06-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.477880 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9664c11c-1653-4690-9eb4-9c4918070a0d-service-ca-bundle\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.478104 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.478118 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.478167 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-dir\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.478204 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a7388c2-4452-4132-961e-3a2f24154237-audit-dir\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.478727 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-policies\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.479055 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:32.979030713 +0000 UTC m=+119.290134788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.479251 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-client-ca\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.479634 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-ca\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.479655 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-config\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.479998 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.480023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.480673 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-audit-policies\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.481168 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-client-ca\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.481932 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5735c0d4-af84-4c65-b453-88a9086e0d8c-config\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.482425 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-service-ca\") pod 
\"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.482602 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-certificates\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.482653 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.483354 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fea687ed-75f7-463e-9c99-c53398e244b5-images\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.483199 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.483431 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-proxy-tls\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.483535 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fea687ed-75f7-463e-9c99-c53398e244b5-proxy-tls\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.483780 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.483840 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-tls\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.484137 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fac797c-c9f7-45e7-91dd-1efa96411e06-serving-cert\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.484457 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.485050 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-trusted-ca\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.485455 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a7388c2-4452-4132-961e-3a2f24154237-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.486673 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.486709 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.488230 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-stats-auth\") 
pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.489099 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e88970a-7b70-4335-ab92-5b927f6864bd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dc6gt\" (UID: \"7e88970a-7b70-4335-ab92-5b927f6864bd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.489129 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e84a898-a553-48bd-afbb-5688db92ff4b-serving-cert\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.489141 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fea687ed-75f7-463e-9c99-c53398e244b5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.489712 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-encryption-config\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.490388 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.490726 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88a08ba8-7cde-40c0-9a88-b01b642c78df-metrics-tls\") pod \"dns-operator-744455d44c-mnv4z\" (UID: \"88a08ba8-7cde-40c0-9a88-b01b642c78df\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.490902 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-serving-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.490994 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-etcd-client\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.491236 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.491589 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-metrics-certs\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.491793 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.492047 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7388c2-4452-4132-961e-3a2f24154237-serving-cert\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.492076 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.492268 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5735c0d4-af84-4c65-b453-88a9086e0d8c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" Jan 27 07:47:32 crc 
kubenswrapper[4799]: I0127 07:47:32.492961 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.495490 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cac170c6-2d9b-4966-873b-a92ce0f3da29-etcd-client\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.495962 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9664c11c-1653-4690-9eb4-9c4918070a0d-default-certificate\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.496254 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac170c6-2d9b-4966-873b-a92ce0f3da29-serving-cert\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.506293 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.525728 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 
07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.545493 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.549788 4799 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.549858 4799 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.549884 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert podName:f6e1f4db-f2d9-4334-99ea-57ec0b6711e2 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.549863201 +0000 UTC m=+119.860967256 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert") pod "authentication-operator-69f744f599-9gr7w" (UID: "f6e1f4db-f2d9-4334-99ea-57ec0b6711e2") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.549945 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca podName:a3cad1fa-7215-4807-8c41-cc85a25dcb32 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.549917453 +0000 UTC m=+119.861021538 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca") pod "cluster-version-operator-5c965bbfc6-69np8" (UID: "a3cad1fa-7215-4807-8c41-cc85a25dcb32") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.549976 4799 secret.go:188] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550015 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert podName:a3cad1fa-7215-4807-8c41-cc85a25dcb32 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550005555 +0000 UTC m=+119.861109630 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert") pod "cluster-version-operator-5c965bbfc6-69np8" (UID: "a3cad1fa-7215-4807-8c41-cc85a25dcb32") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550176 4799 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550216 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550205521 +0000 UTC m=+119.861309596 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550257 4799 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550285 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550276472 +0000 UTC m=+119.861380547 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550285 4799 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550356 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550344584 +0000 UTC m=+119.861448859 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550374 4799 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550399 4799 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550413 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle podName:f6e1f4db-f2d9-4334-99ea-57ec0b6711e2 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550403656 +0000 UTC m=+119.861507741 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle") pod "authentication-operator-69f744f599-9gr7w" (UID: "f6e1f4db-f2d9-4334-99ea-57ec0b6711e2") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550446 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550434117 +0000 UTC m=+119.861538432 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550469 4799 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550496 4799 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550528 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550514189 +0000 UTC m=+119.861618274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550551 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config podName:f6e1f4db-f2d9-4334-99ea-57ec0b6711e2 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.55054152 +0000 UTC m=+119.861645605 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config") pod "authentication-operator-69f744f599-9gr7w" (UID: "f6e1f4db-f2d9-4334-99ea-57ec0b6711e2") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550555 4799 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550584 4799 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550593 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550581932 +0000 UTC m=+119.861686247 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.550611 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.550603852 +0000 UTC m=+119.861707927 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.588631 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.588949 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814c6bed-3956-4eff-9909-58d7b74247c5-serving-cert\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.588985 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/efe93705-6f73-4523-9c27-4e2b2486d7ad-srv-cert\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589047 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5r8m\" (UniqueName: \"kubernetes.io/projected/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-kube-api-access-x5r8m\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: 
I0127 07:47:32.589079 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4251b28-e3a3-4694-b8d3-8106bacdfe86-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589112 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m88mj\" (UniqueName: \"kubernetes.io/projected/efe93705-6f73-4523-9c27-4e2b2486d7ad-kube-api-access-m88mj\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589148 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9eb96a5-27c6-4cab-889a-1938f92b95aa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589172 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae711767-328e-4007-94b6-59087a7ca625-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589201 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f90c837a-43bf-4353-ba01-70a80be22306-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589359 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bc66b736-3c9e-40dd-b203-a9238bf0789d-certs\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589417 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxcxf\" (UniqueName: \"kubernetes.io/projected/6cf7c2a6-012a-48df-9c00-e6eac17da885-kube-api-access-gxcxf\") pod \"ingress-canary-qvlrt\" (UID: \"6cf7c2a6-012a-48df-9c00-e6eac17da885\") " pod="openshift-ingress-canary/ingress-canary-qvlrt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589443 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589528 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-socket-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589570 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae711767-328e-4007-94b6-59087a7ca625-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589605 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de07a2d4-e916-4c2d-bb3b-b8a268461a71-config-volume\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589639 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-tmpfs\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589673 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvhg\" (UniqueName: \"kubernetes.io/projected/bc66b736-3c9e-40dd-b203-a9238bf0789d-kube-api-access-lzvhg\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc 
kubenswrapper[4799]: I0127 07:47:32.589726 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9m47\" (UniqueName: \"kubernetes.io/projected/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-kube-api-access-w9m47\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589766 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgt2\" (UniqueName: \"kubernetes.io/projected/82fa445d-953b-4729-8c80-a2bc760f0ce3-kube-api-access-pvgt2\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589790 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-apiservice-cert\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589817 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-csi-data-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589846 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv4fs\" (UniqueName: \"kubernetes.io/projected/8f54e330-fce1-4959-89f0-76a62f86ae43-kube-api-access-gv4fs\") pod \"migrator-59844c95c7-tbm5t\" (UID: \"8f54e330-fce1-4959-89f0-76a62f86ae43\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589894 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4j9\" (UniqueName: \"kubernetes.io/projected/b4251b28-e3a3-4694-b8d3-8106bacdfe86-kube-api-access-vp4j9\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589928 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-mountpoint-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589952 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf282b3-df77-4087-a390-c000adfd8f86-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n2c6j\" (UID: \"fcf282b3-df77-4087-a390-c000adfd8f86\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.589981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-signing-key\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590006 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590032 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4251b28-e3a3-4694-b8d3-8106bacdfe86-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590056 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590079 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hl6v\" (UniqueName: \"kubernetes.io/projected/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-kube-api-access-9hl6v\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590122 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-registration-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590146 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de07a2d4-e916-4c2d-bb3b-b8a268461a71-secret-volume\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590173 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4whn\" (UniqueName: \"kubernetes.io/projected/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-kube-api-access-b4whn\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bc66b736-3c9e-40dd-b203-a9238bf0789d-node-bootstrap-token\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590230 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/efe93705-6f73-4523-9c27-4e2b2486d7ad-profile-collector-cert\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590254 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590275 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9eb96a5-27c6-4cab-889a-1938f92b95aa-srv-cert\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590342 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfv8\" (UniqueName: \"kubernetes.io/projected/de07a2d4-e916-4c2d-bb3b-b8a268461a71-kube-api-access-djfv8\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590401 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhbl\" (UniqueName: \"kubernetes.io/projected/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-kube-api-access-rzhbl\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590432 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f90c837a-43bf-4353-ba01-70a80be22306-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590520 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/814c6bed-3956-4eff-9909-58d7b74247c5-config\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590554 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-metrics-tls\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590603 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90c837a-43bf-4353-ba01-70a80be22306-config\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-signing-cabundle\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwc9g\" (UniqueName: \"kubernetes.io/projected/17f2f9b7-aad3-4959-8193-3e3e1d525141-kube-api-access-dwc9g\") pod \"control-plane-machine-set-operator-78cbb6b69f-fmzz6\" (UID: \"17f2f9b7-aad3-4959-8193-3e3e1d525141\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590710 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6cf7c2a6-012a-48df-9c00-e6eac17da885-cert\") pod \"ingress-canary-qvlrt\" (UID: \"6cf7c2a6-012a-48df-9c00-e6eac17da885\") " pod="openshift-ingress-canary/ingress-canary-qvlrt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590738 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-plugins-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590781 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ps8z\" (UniqueName: \"kubernetes.io/projected/c9eb96a5-27c6-4cab-889a-1938f92b95aa-kube-api-access-9ps8z\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590827 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99nck\" (UniqueName: \"kubernetes.io/projected/fcf282b3-df77-4087-a390-c000adfd8f86-kube-api-access-99nck\") pod \"package-server-manager-789f6589d5-n2c6j\" (UID: \"fcf282b3-df77-4087-a390-c000adfd8f86\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd2gn\" (UniqueName: \"kubernetes.io/projected/814c6bed-3956-4eff-9909-58d7b74247c5-kube-api-access-nd2gn\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:32 crc 
kubenswrapper[4799]: I0127 07:47:32.590899 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae711767-328e-4007-94b6-59087a7ca625-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590916 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-webhook-cert\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590940 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/17f2f9b7-aad3-4959-8193-3e3e1d525141-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fmzz6\" (UID: \"17f2f9b7-aad3-4959-8193-3e3e1d525141\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.590969 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-config-volume\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.591824 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-config-volume\") pod \"dns-default-xjwwr\" (UID: 
\"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.592204 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.092129807 +0000 UTC m=+119.403233872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.593952 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae711767-328e-4007-94b6-59087a7ca625-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.594105 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4251b28-e3a3-4694-b8d3-8106bacdfe86-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.594852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-socket-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.595521 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.595715 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-trusted-ca\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.596787 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.597057 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-registration-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.597507 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.597850 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/de07a2d4-e916-4c2d-bb3b-b8a268461a71-config-volume\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.598104 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-tmpfs\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.598219 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-csi-data-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.598282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f90c837a-43bf-4353-ba01-70a80be22306-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.598359 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-plugins-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.598404 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/82fa445d-953b-4729-8c80-a2bc760f0ce3-mountpoint-dir\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.599249 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-signing-cabundle\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.599821 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bc66b736-3c9e-40dd-b203-a9238bf0789d-certs\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.600347 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814c6bed-3956-4eff-9909-58d7b74247c5-serving-cert\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.600938 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814c6bed-3956-4eff-9909-58d7b74247c5-config\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.601143 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fcf282b3-df77-4087-a390-c000adfd8f86-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n2c6j\" (UID: \"fcf282b3-df77-4087-a390-c000adfd8f86\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.601763 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-signing-key\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.601785 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90c837a-43bf-4353-ba01-70a80be22306-config\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.602051 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-apiservice-cert\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.604236 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de07a2d4-e916-4c2d-bb3b-b8a268461a71-secret-volume\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.604468 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/17f2f9b7-aad3-4959-8193-3e3e1d525141-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fmzz6\" (UID: \"17f2f9b7-aad3-4959-8193-3e3e1d525141\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.604568 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/efe93705-6f73-4523-9c27-4e2b2486d7ad-srv-cert\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.605133 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.605204 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c9eb96a5-27c6-4cab-889a-1938f92b95aa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.605544 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bc66b736-3c9e-40dd-b203-a9238bf0789d-node-bootstrap-token\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " 
pod="openshift-machine-config-operator/machine-config-server-clk6d" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.605577 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/efe93705-6f73-4523-9c27-4e2b2486d7ad-profile-collector-cert\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.606691 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6cf7c2a6-012a-48df-9c00-e6eac17da885-cert\") pod \"ingress-canary-qvlrt\" (UID: \"6cf7c2a6-012a-48df-9c00-e6eac17da885\") " pod="openshift-ingress-canary/ingress-canary-qvlrt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.606787 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae711767-328e-4007-94b6-59087a7ca625-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.606977 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-webhook-cert\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.607199 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.607883 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4251b28-e3a3-4694-b8d3-8106bacdfe86-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.608782 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-metrics-tls\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.609193 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-metrics-tls\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.609486 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c9eb96a5-27c6-4cab-889a-1938f92b95aa-srv-cert\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.632462 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.642033 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208eb479-5aaa-44f5-91d4-7a9394a2aac2-serving-cert\") pod \"console-operator-58897d9998-lzvh6\" (UID: 
\"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.651408 4799 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.651485 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls podName:632a04e6-2ac7-4d81-a22c-2e3d4b58afe4 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.651461852 +0000 UTC m=+119.962565917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-7gnsz" (UID: "632a04e6-2ac7-4d81-a22c-2e3d4b58afe4") : failed to sync secret cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.652055 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.655600 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208eb479-5aaa-44f5-91d4-7a9394a2aac2-trusted-ca\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.666208 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.685660 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 07:47:32 crc 
kubenswrapper[4799]: I0127 07:47:32.688431 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g9lhq"] Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.692764 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.693319 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.193277235 +0000 UTC m=+119.504381300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: W0127 07:47:32.696188 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dc5c15b_696b_49fe_9593_102cc1e00398.slice/crio-3bff4814055a328eab9d49fd8d28dc85a5051e680626a39397f94c7ab4b583d0 WatchSource:0}: Error finding container 3bff4814055a328eab9d49fd8d28dc85a5051e680626a39397f94c7ab4b583d0: Status 404 returned error can't find the container with id 3bff4814055a328eab9d49fd8d28dc85a5051e680626a39397f94c7ab4b583d0 Jan 27 07:47:32 crc 
kubenswrapper[4799]: I0127 07:47:32.705638 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.726420 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.745289 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.751551 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8kx4\" (UniqueName: \"kubernetes.io/projected/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-kube-api-access-p8kx4\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.774871 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.782834 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.786256 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.794543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.794954 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.294915517 +0000 UTC m=+119.606019582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.795684 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.796011 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.295998637 +0000 UTC m=+119.607102702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.807120 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.825549 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.836288 4799 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.845526 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.849940 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7t6x\" (UniqueName: \"kubernetes.io/projected/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-kube-api-access-r7t6x\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.866099 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.869816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3cad1fa-7215-4807-8c41-cc85a25dcb32-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.885388 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.896359 4799 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.896722 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.896908 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.396874358 +0000 UTC m=+119.707978423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.897408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.897774 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.397762042 +0000 UTC m=+119.708866107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.905441 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.925701 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.926703 4799 projected.go:194] Error preparing data for projected volume kube-api-access-bgw95 for pod openshift-apiserver/apiserver-76f77b778f-9t8n9: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.926785 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebd2f02f-3d33-46f5-b78f-c3a81e326627-kube-api-access-bgw95 podName:ebd2f02f-3d33-46f5-b78f-c3a81e326627 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.426758436 +0000 UTC m=+119.737862501 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bgw95" (UniqueName: "kubernetes.io/projected/ebd2f02f-3d33-46f5-b78f-c3a81e326627-kube-api-access-bgw95") pod "apiserver-76f77b778f-9t8n9" (UID: "ebd2f02f-3d33-46f5-b78f-c3a81e326627") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.945445 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.966030 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.985412 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.998343 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.998446 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.498426196 +0000 UTC m=+119.809530261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:32 crc kubenswrapper[4799]: I0127 07:47:32.998773 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:32 crc kubenswrapper[4799]: E0127 07:47:32.999129 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.499122076 +0000 UTC m=+119.810226141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.005470 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.010075 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-config\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.023624 4799 request.go:700] Waited for 1.693383663s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&limit=500&resourceVersion=0 Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.025045 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.045864 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.066024 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 
07:47:33.067046 4799 projected.go:194] Error preparing data for projected volume kube-api-access-pdss4 for pod openshift-console/console-f9d7485db-bl4wn: failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.067115 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4 podName:1c1b6ac6-0dc3-4f65-bb94-d448893ae317 nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.567095963 +0000 UTC m=+119.878200028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pdss4" (UniqueName: "kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4") pod "console-f9d7485db-bl4wn" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317") : failed to sync configmap cache: timed out waiting for the condition Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.085789 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.099925 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.101222 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.601204501 +0000 UTC m=+119.912308566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.105428 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.126001 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.131541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-serving-cert\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.149599 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.155856 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" event={"ID":"a8169c83-7958-41b3-84b9-94fd314f09e8","Type":"ContainerStarted","Data":"3114f40abfa0fe0a1a4a7c25de7c727e06f0ff215516c0ed844332e075de636b"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.155902 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" 
event={"ID":"a8169c83-7958-41b3-84b9-94fd314f09e8","Type":"ContainerStarted","Data":"303902bb6de90228fda9d65131d245f5ce2b3bf1ca8c53b5f6f788dfa5a745be"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.159681 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" event={"ID":"5dc5c15b-696b-49fe-9593-102cc1e00398","Type":"ContainerStarted","Data":"cc0970f9feb72416503747d4586b157128615e6328811482082ba0cef95dcc70"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.159862 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" event={"ID":"5dc5c15b-696b-49fe-9593-102cc1e00398","Type":"ContainerStarted","Data":"6055c0a78b5d5ccaee12c65918ad9902ed5e4aa06fced2b0813b7b03d56bb480"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.160034 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" event={"ID":"5dc5c15b-696b-49fe-9593-102cc1e00398","Type":"ContainerStarted","Data":"3bff4814055a328eab9d49fd8d28dc85a5051e680626a39397f94c7ab4b583d0"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.165713 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" event={"ID":"58e91935-0e33-4595-8b6a-27f157d8adaf","Type":"ContainerStarted","Data":"2d829e6f209fc6977b3138e04e6bcafc11f4a393f6ccd349fbfcc2f1fd98eb01"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.165774 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" event={"ID":"58e91935-0e33-4595-8b6a-27f157d8adaf","Type":"ContainerStarted","Data":"0d5ba9c5d69d6bfe04504eedfaf6ff586731d6f1f1a95f0a6bf201a6ddab84d3"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.166468 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t5gnr\" (UniqueName: \"kubernetes.io/projected/a593dc31-38ff-4849-9ad0-cbaf0b6d1547-kube-api-access-t5gnr\") pod \"downloads-7954f5f757-tnr7q\" (UID: \"a593dc31-38ff-4849-9ad0-cbaf0b6d1547\") " pod="openshift-console/downloads-7954f5f757-tnr7q" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.166497 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.167695 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" event={"ID":"a7d76fd7-71e0-4263-ba99-b08222f58e6f","Type":"ContainerStarted","Data":"ca5c6bbb35f28ab6bbd40f8b02a82fa1a6cd6b1de220c67806cdf59c0b4edd12"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.167847 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" event={"ID":"a7d76fd7-71e0-4263-ba99-b08222f58e6f","Type":"ContainerStarted","Data":"41bdb3e3efce67ef590a216fa5c025425503c998c0f39935b274c6dd67731a2f"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.168646 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208eb479-5aaa-44f5-91d4-7a9394a2aac2-config\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.170574 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" event={"ID":"de0b6ae8-2347-46f4-9870-9e9b14d6a621","Type":"ContainerStarted","Data":"e75227d5689d971048b79a56ccdb4774bcd220783a48609dbdb1e6d660802cd7"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.170696 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" event={"ID":"de0b6ae8-2347-46f4-9870-9e9b14d6a621","Type":"ContainerStarted","Data":"35cba62e6ebb95bac09fcae84503c14d905b24619ef178cc9a8fd2f59d360e17"} Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.189819 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.204228 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.204810 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.204841 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.704810278 +0000 UTC m=+120.015914413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.226088 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.245809 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.266059 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.286953 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.305406 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.306105 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.306263 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.806243353 +0000 UTC m=+120.117347428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.307064 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.307778 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.807766426 +0000 UTC m=+120.118870581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.324901 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.333280 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tnr7q"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.400909 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsxwm\" (UniqueName: \"kubernetes.io/projected/9664c11c-1653-4690-9eb4-9c4918070a0d-kube-api-access-qsxwm\") pod \"router-default-5444994796-l4462\" (UID: \"9664c11c-1653-4690-9eb4-9c4918070a0d\") " pod="openshift-ingress/router-default-5444994796-l4462"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.408910 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.409077 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.909049618 +0000 UTC m=+120.220153683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.409631 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.409965 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:33.909957033 +0000 UTC m=+120.221061098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.427406 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzj2d\" (UniqueName: \"kubernetes.io/projected/fea687ed-75f7-463e-9c99-c53398e244b5-kube-api-access-jzj2d\") pod \"machine-config-operator-74547568cd-9m4qv\" (UID: \"fea687ed-75f7-463e-9c99-c53398e244b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.428565 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.442884 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l4462"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.454992 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvdt\" (UniqueName: \"kubernetes.io/projected/7e88970a-7b70-4335-ab92-5b927f6864bd-kube-api-access-tvvdt\") pod \"multus-admission-controller-857f4d67dd-dc6gt\" (UID: \"7e88970a-7b70-4335-ab92-5b927f6864bd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.466968 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w68rn\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-kube-api-access-w68rn\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.483965 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97f98\" (UniqueName: \"kubernetes.io/projected/810999fd-fa8e-4e6c-9b07-bc58f174202b-kube-api-access-97f98\") pod \"oauth-openshift-558db77b4-n67f6\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.505727 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwvh4\" (UniqueName: \"kubernetes.io/projected/2aec4ae4-6eeb-4e1f-8912-8401d5607d2d-kube-api-access-rwvh4\") pod \"machine-config-controller-84d6567774-svppz\" (UID: \"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.510595 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.510785 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgw95\" (UniqueName: \"kubernetes.io/projected/ebd2f02f-3d33-46f5-b78f-c3a81e326627-kube-api-access-bgw95\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.511878 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.011857923 +0000 UTC m=+120.322961998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.516421 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgw95\" (UniqueName: \"kubernetes.io/projected/ebd2f02f-3d33-46f5-b78f-c3a81e326627-kube-api-access-bgw95\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.532368 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8z4k\" (UniqueName: \"kubernetes.io/projected/cac170c6-2d9b-4966-873b-a92ce0f3da29-kube-api-access-q8z4k\") pod \"etcd-operator-b45778765-64xcf\" (UID: \"cac170c6-2d9b-4966-873b-a92ce0f3da29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.553998 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtsz\" (UniqueName: \"kubernetes.io/projected/5fac797c-c9f7-45e7-91dd-1efa96411e06-kube-api-access-wwtsz\") pod \"openshift-config-operator-7777fb866f-7zk6z\" (UID: \"5fac797c-c9f7-45e7-91dd-1efa96411e06\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.578698 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tnr7q"]
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.579948 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5735c0d4-af84-4c65-b453-88a9086e0d8c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rrhvs\" (UID: \"5735c0d4-af84-4c65-b453-88a9086e0d8c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.595548 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnvq\" (UniqueName: \"kubernetes.io/projected/208eb479-5aaa-44f5-91d4-7a9394a2aac2-kube-api-access-kwnvq\") pod \"console-operator-58897d9998-lzvh6\" (UID: \"208eb479-5aaa-44f5-91d4-7a9394a2aac2\") " pod="openshift-console-operator/console-operator-58897d9998-lzvh6"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.608397 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lzvh6"
Jan 27 07:47:33 crc kubenswrapper[4799]: W0127 07:47:33.608404 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda593dc31_38ff_4849_9ad0_cbaf0b6d1547.slice/crio-3ece638065466d5c508063fe86df6fbfcca32a7db2bcca8b68c155b3deae5f60 WatchSource:0}: Error finding container 3ece638065466d5c508063fe86df6fbfcca32a7db2bcca8b68c155b3deae5f60: Status 404 returned error can't find the container with id 3ece638065466d5c508063fe86df6fbfcca32a7db2bcca8b68c155b3deae5f60
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612372 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612421 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612476 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612533 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612587 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdss4\" (UniqueName: \"kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612651 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612733 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612763 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612813 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612856 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612881 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612897 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.612912 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.613567 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.613883 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tct\" (UniqueName: \"kubernetes.io/projected/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-kube-api-access-c2tct\") pod \"controller-manager-879f6c89f-fv5p6\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.615153 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-service-ca-bundle\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.615733 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3cad1fa-7215-4807-8c41-cc85a25dcb32-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.616494 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3cad1fa-7215-4807-8c41-cc85a25dcb32-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69np8\" (UID: \"a3cad1fa-7215-4807-8c41-cc85a25dcb32\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.630052 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-serving-cert\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.631775 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2f02f-3d33-46f5-b78f-c3a81e326627-image-import-ca\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.634562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.635090 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.13506956 +0000 UTC m=+120.446173625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.635186 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.636393 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-etcd-client\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.636388 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdss4\" (UniqueName: \"kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4\") pod \"console-f9d7485db-bl4wn\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.637329 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-serving-cert\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.640236 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e1f4db-f2d9-4334-99ea-57ec0b6711e2-config\") pod \"authentication-operator-69f744f599-9gr7w\" (UID: \"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.641943 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebd2f02f-3d33-46f5-b78f-c3a81e326627-encryption-config\") pod \"apiserver-76f77b778f-9t8n9\" (UID: \"ebd2f02f-3d33-46f5-b78f-c3a81e326627\") " pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.646111 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-bound-sa-token\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.663398 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gprsr\" (UniqueName: \"kubernetes.io/projected/5e84a898-a553-48bd-afbb-5688db92ff4b-kube-api-access-gprsr\") pod \"route-controller-manager-6576b87f9c-77f8f\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.669389 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv"]
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.670513 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.678168 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrrgx\" (UniqueName: \"kubernetes.io/projected/88a08ba8-7cde-40c0-9a88-b01b642c78df-kube-api-access-mrrgx\") pod \"dns-operator-744455d44c-mnv4z\" (UID: \"88a08ba8-7cde-40c0-9a88-b01b642c78df\") " pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.680265 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.687283 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8746\" (UniqueName: \"kubernetes.io/projected/6a7388c2-4452-4132-961e-3a2f24154237-kube-api-access-q8746\") pod \"apiserver-7bbb656c7d-rdpz8\" (UID: \"6a7388c2-4452-4132-961e-3a2f24154237\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.687458 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.706216 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.706438 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxcxf\" (UniqueName: \"kubernetes.io/projected/6cf7c2a6-012a-48df-9c00-e6eac17da885-kube-api-access-gxcxf\") pod \"ingress-canary-qvlrt\" (UID: \"6cf7c2a6-012a-48df-9c00-e6eac17da885\") " pod="openshift-ingress-canary/ingress-canary-qvlrt"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.708724 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"
Jan 27 07:47:33 crc kubenswrapper[4799]: W0127 07:47:33.712139 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea687ed_75f7_463e_9c99_c53398e244b5.slice/crio-e57cb09e5231b0d886fcaade1d1f2e25f15cc90ac23d7800484fdb278bc4a527 WatchSource:0}: Error finding container e57cb09e5231b0d886fcaade1d1f2e25f15cc90ac23d7800484fdb278bc4a527: Status 404 returned error can't find the container with id e57cb09e5231b0d886fcaade1d1f2e25f15cc90ac23d7800484fdb278bc4a527
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.713550 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.713731 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz"
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.714440 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.214418506 +0000 UTC m=+120.525522571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.716490 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.717965 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/632a04e6-2ac7-4d81-a22c-2e3d4b58afe4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7gnsz\" (UID: \"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.725884 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae711767-328e-4007-94b6-59087a7ca625-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zsf62\" (UID: \"ae711767-328e-4007-94b6-59087a7ca625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.737836 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.750466 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.758169 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.760476 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv4fs\" (UniqueName: \"kubernetes.io/projected/8f54e330-fce1-4959-89f0-76a62f86ae43-kube-api-access-gv4fs\") pod \"migrator-59844c95c7-tbm5t\" (UID: \"8f54e330-fce1-4959-89f0-76a62f86ae43\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.766744 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.788562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5r8m\" (UniqueName: \"kubernetes.io/projected/3817b4c1-1d03-4512-8a0e-f339f9c2fb5f-kube-api-access-x5r8m\") pod \"ingress-operator-5b745b69d9-lrjsc\" (UID: \"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.802767 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.815151 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.815493 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.315481832 +0000 UTC m=+120.626585897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.824193 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m88mj\" (UniqueName: \"kubernetes.io/projected/efe93705-6f73-4523-9c27-4e2b2486d7ad-kube-api-access-m88mj\") pod \"catalog-operator-68c6474976-497f2\" (UID: \"efe93705-6f73-4523-9c27-4e2b2486d7ad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.850440 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzvhg\" (UniqueName: \"kubernetes.io/projected/bc66b736-3c9e-40dd-b203-a9238bf0789d-kube-api-access-lzvhg\") pod \"machine-config-server-clk6d\" (UID: \"bc66b736-3c9e-40dd-b203-a9238bf0789d\") " pod="openshift-machine-config-operator/machine-config-server-clk6d"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.850716 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bl4wn"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.851172 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.851275 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.855531 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.855630 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.873959 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.874919 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.882394 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-clk6d"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.882613 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgt2\" (UniqueName: \"kubernetes.io/projected/82fa445d-953b-4729-8c80-a2bc760f0ce3-kube-api-access-pvgt2\") pod \"csi-hostpathplugin-8g24k\" (UID: \"82fa445d-953b-4729-8c80-a2bc760f0ce3\") " pod="hostpath-provisioner/csi-hostpathplugin-8g24k"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.883205 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9m47\" (UniqueName: \"kubernetes.io/projected/f6abdebd-9ed0-44e3-934a-9472c0f92bc7-kube-api-access-w9m47\") pod \"service-ca-9c57cc56f-d8mn9\" (UID: \"f6abdebd-9ed0-44e3-934a-9472c0f92bc7\") " pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.890346 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.893609 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f90c837a-43bf-4353-ba01-70a80be22306-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bhdxs\" (UID: \"f90c837a-43bf-4353-ba01-70a80be22306\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.913380 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8g24k"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.915220 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qvlrt"
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.915854 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:33 crc kubenswrapper[4799]: E0127 07:47:33.916455 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.416441094 +0000 UTC m=+120.727545159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.923927 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.936353 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4j9\" (UniqueName: \"kubernetes.io/projected/b4251b28-e3a3-4694-b8d3-8106bacdfe86-kube-api-access-vp4j9\") pod \"kube-storage-version-migrator-operator-b67b599dd-mn8tq\" (UID: \"b4251b28-e3a3-4694-b8d3-8106bacdfe86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.945653 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4whn\" (UniqueName: \"kubernetes.io/projected/dd94cd3b-bc32-422f-8c10-dc6d7cb52453-kube-api-access-b4whn\") pod \"packageserver-d55dfcdfc-lfclk\" (UID: \"dd94cd3b-bc32-422f-8c10-dc6d7cb52453\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.952895 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hl6v\" (UniqueName: \"kubernetes.io/projected/74820e6d-62e6-49db-8f4d-a49ae5fe95ee-kube-api-access-9hl6v\") pod \"dns-default-xjwwr\" (UID: \"74820e6d-62e6-49db-8f4d-a49ae5fe95ee\") " pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.962235 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djfv8\" (UniqueName: \"kubernetes.io/projected/de07a2d4-e916-4c2d-bb3b-b8a268461a71-kube-api-access-djfv8\") pod \"collect-profiles-29491665-4rrsc\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:33 crc kubenswrapper[4799]: I0127 07:47:33.981138 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99nck\" (UniqueName: 
\"kubernetes.io/projected/fcf282b3-df77-4087-a390-c000adfd8f86-kube-api-access-99nck\") pod \"package-server-manager-789f6589d5-n2c6j\" (UID: \"fcf282b3-df77-4087-a390-c000adfd8f86\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.017504 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.017933 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.517916782 +0000 UTC m=+120.829020847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.021836 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhbl\" (UniqueName: \"kubernetes.io/projected/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-kube-api-access-rzhbl\") pod \"marketplace-operator-79b997595-2m2xz\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.022398 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd2gn\" (UniqueName: \"kubernetes.io/projected/814c6bed-3956-4eff-9909-58d7b74247c5-kube-api-access-nd2gn\") pod \"service-ca-operator-777779d784-dxknv\" (UID: \"814c6bed-3956-4eff-9909-58d7b74247c5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.025408 4799 request.go:700] Waited for 1.423324636s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.045079 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lzvh6"] Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.048059 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ps8z\" (UniqueName: 
\"kubernetes.io/projected/c9eb96a5-27c6-4cab-889a-1938f92b95aa-kube-api-access-9ps8z\") pod \"olm-operator-6b444d44fb-lxqzc\" (UID: \"c9eb96a5-27c6-4cab-889a-1938f92b95aa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.075965 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.082865 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwc9g\" (UniqueName: \"kubernetes.io/projected/17f2f9b7-aad3-4959-8193-3e3e1d525141-kube-api-access-dwc9g\") pod \"control-plane-machine-set-operator-78cbb6b69f-fmzz6\" (UID: \"17f2f9b7-aad3-4959-8193-3e3e1d525141\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.085152 4799 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.085172 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.085217 4799 csr.go:257] certificate signing request csr-cw82v is issued Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.111204 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.114875 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.120388 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.120777 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.620758377 +0000 UTC m=+120.931862442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.124784 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.130817 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.143798 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.151561 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.156154 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.169063 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n67f6"] Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.172130 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.172137 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.214853 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" event={"ID":"a3cad1fa-7215-4807-8c41-cc85a25dcb32","Type":"ContainerStarted","Data":"6d32c59a1f07185f0f51816d8aaf0d2e11bf2fd449e74932fefbb75a3da88a27"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.216236 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.218825 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l4462" event={"ID":"9664c11c-1653-4690-9eb4-9c4918070a0d","Type":"ContainerStarted","Data":"22f70b9deda3a49e570095051b36a2d4207e44284447750ac5ff26bd664e8442"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.218882 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l4462" event={"ID":"9664c11c-1653-4690-9eb4-9c4918070a0d","Type":"ContainerStarted","Data":"6fb255703c4bfdd467bc113bb7cfac8a6c82cb932ff0a029ab2091c47b92428b"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.225122 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.225951 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.725911747 +0000 UTC m=+121.037015812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.232696 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tnr7q" event={"ID":"a593dc31-38ff-4849-9ad0-cbaf0b6d1547","Type":"ContainerStarted","Data":"5edf0794d3fb995a2d3c3e73b86b76de2640024a2a77741fae0840ebb6ada779"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.232745 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tnr7q" event={"ID":"a593dc31-38ff-4849-9ad0-cbaf0b6d1547","Type":"ContainerStarted","Data":"3ece638065466d5c508063fe86df6fbfcca32a7db2bcca8b68c155b3deae5f60"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.233619 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tnr7q" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.237584 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnr7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.237657 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tnr7q" podUID="a593dc31-38ff-4849-9ad0-cbaf0b6d1547" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 07:47:34 crc 
kubenswrapper[4799]: I0127 07:47:34.259811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lzvh6" event={"ID":"208eb479-5aaa-44f5-91d4-7a9394a2aac2","Type":"ContainerStarted","Data":"ec5f7cdec1b8e2858c258139432d0984754655a9ca888bba8234348530a87adf"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.262491 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-clk6d" event={"ID":"bc66b736-3c9e-40dd-b203-a9238bf0789d","Type":"ContainerStarted","Data":"0219079d86b5459447bbfcfabf901af327fdccb66416a47adac9c1ac2d86c2e0"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.266046 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" event={"ID":"fea687ed-75f7-463e-9c99-c53398e244b5","Type":"ContainerStarted","Data":"e57cb09e5231b0d886fcaade1d1f2e25f15cc90ac23d7800484fdb278bc4a527"} Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.326489 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.328036 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.828021173 +0000 UTC m=+121.139125238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.437985 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.438343 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:34.938330158 +0000 UTC m=+121.249434213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.441972 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z"] Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.445660 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.448315 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dc6gt"] Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.515286 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"] Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.515380 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mnv4z"] Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.539580 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.539890 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.039833825 +0000 UTC m=+121.350937890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.540096 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.540520 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.040509565 +0000 UTC m=+121.351613690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.605896 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:34 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:34 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:34 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.606021 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.640794 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.641344 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:47:35.141320883 +0000 UTC m=+121.452424948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.743343 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.744016 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.244005234 +0000 UTC m=+121.555109289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.794220 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs"] Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.849834 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.850168 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.350154393 +0000 UTC m=+121.661258458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:34 crc kubenswrapper[4799]: I0127 07:47:34.955249 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:34 crc kubenswrapper[4799]: E0127 07:47:34.962574 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.462547026 +0000 UTC m=+121.773651091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.017371 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.025110 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wppxn" podStartSLOduration=100.025092501 podStartE2EDuration="1m40.025092501s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:34.989603276 +0000 UTC m=+121.300707341" watchObservedRunningTime="2026-01-27 07:47:35.025092501 +0000 UTC m=+121.336196576"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.025447 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.035735 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-svppz"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.037261 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-64xcf"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.051901 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qvlrt"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.058485 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.058933 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.55891476 +0000 UTC m=+121.870018825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.089408 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 07:42:33 +0000 UTC, rotation deadline is 2026-10-19 14:02:10.209956843 +0000 UTC
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.089483 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6366h14m35.120476805s for next certificate rotation
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.160468 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.162511 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.662489646 +0000 UTC m=+121.973593711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.186269 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-g9lhq" podStartSLOduration=100.186252743 podStartE2EDuration="1m40.186252743s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:35.18545483 +0000 UTC m=+121.496558895" watchObservedRunningTime="2026-01-27 07:47:35.186252743 +0000 UTC m=+121.497356818"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.189052 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nqdj2" podStartSLOduration=101.189039131 podStartE2EDuration="1m41.189039131s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:35.133498552 +0000 UTC m=+121.444602617" watchObservedRunningTime="2026-01-27 07:47:35.189039131 +0000 UTC m=+121.500143196"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.256357 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tnr7q" podStartSLOduration=101.256338179 podStartE2EDuration="1m41.256338179s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:35.255731963 +0000 UTC m=+121.566836028" watchObservedRunningTime="2026-01-27 07:47:35.256338179 +0000 UTC m=+121.567442244"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.263054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.263468 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.763449209 +0000 UTC m=+122.074553274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.273861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-clk6d" event={"ID":"bc66b736-3c9e-40dd-b203-a9238bf0789d","Type":"ContainerStarted","Data":"aed124a6adad8094c6f8163b32f5ec797fc9fd03dc75e5f987ea24ba173737bd"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.276948 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" event={"ID":"7e88970a-7b70-4335-ab92-5b927f6864bd","Type":"ContainerStarted","Data":"151d7a0599334991d0badc689a8c6405f1e362f18ee38eb67c5b590d7914da78"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.279016 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" event={"ID":"fea687ed-75f7-463e-9c99-c53398e244b5","Type":"ContainerStarted","Data":"c563bcca0872f0381af39230a88fb572ec9b344780bb595672cce9ef9941045f"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.279044 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" event={"ID":"fea687ed-75f7-463e-9c99-c53398e244b5","Type":"ContainerStarted","Data":"2d176e70339913652e175b8fe5c36316d078731f95761d6e6d03dfd287cccd2d"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.280958 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" event={"ID":"a3cad1fa-7215-4807-8c41-cc85a25dcb32","Type":"ContainerStarted","Data":"c0ab56679215e9d25d4d8683b5f7f7b8eddc0c339346473ec4aa8039b601e049"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.281893 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" event={"ID":"810999fd-fa8e-4e6c-9b07-bc58f174202b","Type":"ContainerStarted","Data":"eecde9149ee5a3f8c9a25110cf026c0426a45fa3be3b5722732317b610ba01a9"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.294591 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lzvh6" event={"ID":"208eb479-5aaa-44f5-91d4-7a9394a2aac2","Type":"ContainerStarted","Data":"ead238d27acd39dc0438627581da664dd70106f8e0ead6977820176fbec72841"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.295569 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-lzvh6"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.300208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" event={"ID":"5fac797c-c9f7-45e7-91dd-1efa96411e06","Type":"ContainerStarted","Data":"daa08dddf55a85f3c4624e511b6ac76e575605b0fc5d24d1cf49c27cb622290c"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.300275 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" event={"ID":"5fac797c-c9f7-45e7-91dd-1efa96411e06","Type":"ContainerStarted","Data":"008f9349c46b866e5c9e7a09aa6efde83bbf942a9da73bc94c221c876fea796a"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.307943 4799 patch_prober.go:28] interesting pod/console-operator-58897d9998-lzvh6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.308003 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-lzvh6" podUID="208eb479-5aaa-44f5-91d4-7a9394a2aac2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 27 07:47:35 crc kubenswrapper[4799]: W0127 07:47:35.312201 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcac170c6_2d9b_4966_873b_a92ce0f3da29.slice/crio-eefd512b5b01503c479d4aba3d13474bf598da0c089661e306080601a57821dd WatchSource:0}: Error finding container eefd512b5b01503c479d4aba3d13474bf598da0c089661e306080601a57821dd: Status 404 returned error can't find the container with id eefd512b5b01503c479d4aba3d13474bf598da0c089661e306080601a57821dd
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.312333 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" event={"ID":"88a08ba8-7cde-40c0-9a88-b01b642c78df","Type":"ContainerStarted","Data":"cf46ed95a50ccdc024a07f521b5bedee3d857f24994891ca484b412e44cbe043"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.323771 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" event={"ID":"5735c0d4-af84-4c65-b453-88a9086e0d8c","Type":"ContainerStarted","Data":"380f3a677693c09f1ef0df3cad374fc50c99534c341e46b839fb4bba77fa9fc0"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.343941 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" event={"ID":"5e84a898-a553-48bd-afbb-5688db92ff4b","Type":"ContainerStarted","Data":"e2b46e4dff22bf246cca5dacd1612f5af3f1620a202b4136c147080641bbe854"}
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.345997 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnr7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.346029 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tnr7q" podUID="a593dc31-38ff-4849-9ad0-cbaf0b6d1547" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.365107 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.369090 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.869060083 +0000 UTC m=+122.180164218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.455714 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 07:47:35 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld
Jan 27 07:47:35 crc kubenswrapper[4799]: [+]process-running ok
Jan 27 07:47:35 crc kubenswrapper[4799]: healthz check failed
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.455769 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.465835 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.467098 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:35.967083853 +0000 UTC m=+122.278187918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.528406 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9t8n9"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.531518 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bl4wn"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.534804 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6268k" podStartSLOduration=101.534769272 podStartE2EDuration="1m41.534769272s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:35.496544129 +0000 UTC m=+121.807648204" watchObservedRunningTime="2026-01-27 07:47:35.534769272 +0000 UTC m=+121.845873337"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.555474 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.559162 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62"]
Jan 27 07:47:35 crc kubenswrapper[4799]: W0127 07:47:35.560040 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebd2f02f_3d33_46f5_b78f_c3a81e326627.slice/crio-8761a39f3f35bddfdefebdd5b81a80c43486a8580e212fe4963853af1b9b8a11 WatchSource:0}: Error finding container 8761a39f3f35bddfdefebdd5b81a80c43486a8580e212fe4963853af1b9b8a11: Status 404 returned error can't find the container with id 8761a39f3f35bddfdefebdd5b81a80c43486a8580e212fe4963853af1b9b8a11
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.574455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.575056 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.075036722 +0000 UTC m=+122.386140787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.591029 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.601609 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-l4462" podStartSLOduration=100.601583427 podStartE2EDuration="1m40.601583427s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:35.58139891 +0000 UTC m=+121.892502995" watchObservedRunningTime="2026-01-27 07:47:35.601583427 +0000 UTC m=+121.912687492"
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.608508 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fv5p6"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.610213 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.627367 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8g24k"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.676356 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.676503 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.176480448 +0000 UTC m=+122.487584513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.676669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.676975 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.176967271 +0000 UTC m=+122.488071336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: W0127 07:47:35.677312 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcf282b3_df77_4087_a390_c000adfd8f86.slice/crio-0ad03248da32a06d2e71da404cd530ffff04a9c01ae2f3e8a3157cd6696eb5a5 WatchSource:0}: Error finding container 0ad03248da32a06d2e71da404cd530ffff04a9c01ae2f3e8a3157cd6696eb5a5: Status 404 returned error can't find the container with id 0ad03248da32a06d2e71da404cd530ffff04a9c01ae2f3e8a3157cd6696eb5a5
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.698270 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkdlq" podStartSLOduration=100.698254319 podStartE2EDuration="1m40.698254319s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:35.69760169 +0000 UTC m=+122.008705755" watchObservedRunningTime="2026-01-27 07:47:35.698254319 +0000 UTC m=+122.009358384"
Jan 27 07:47:35 crc kubenswrapper[4799]: W0127 07:47:35.709987 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82fa445d_953b_4729_8c80_a2bc760f0ce3.slice/crio-ec88cf7eeb715ececda4451d88500925967863f3bdb3fbbe52855f095da69b73 WatchSource:0}: Error finding container ec88cf7eeb715ececda4451d88500925967863f3bdb3fbbe52855f095da69b73: Status 404 returned error can't find the container with id ec88cf7eeb715ececda4451d88500925967863f3bdb3fbbe52855f095da69b73
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.774083 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9gr7w"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.778194 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.778769 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.278751677 +0000 UTC m=+122.589855742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: W0127 07:47:35.795690 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e1f4db_f2d9_4334_99ea_57ec0b6711e2.slice/crio-385d25af01171b57939573f31524303779770b66d87db08bcd1f89950f06995d WatchSource:0}: Error finding container 385d25af01171b57939573f31524303779770b66d87db08bcd1f89950f06995d: Status 404 returned error can't find the container with id 385d25af01171b57939573f31524303779770b66d87db08bcd1f89950f06995d
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.882096 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.882437 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.382425666 +0000 UTC m=+122.693529731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.935886 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xjwwr"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.936183 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.939233 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dxknv"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.953047 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.960395 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.961354 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.974532 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6"]
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.988000 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:35 crc kubenswrapper[4799]: E0127 07:47:35.988441 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.48842248 +0000 UTC m=+122.799526545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:35 crc kubenswrapper[4799]: I0127 07:47:35.997364 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs"]
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.010916 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc"]
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.023978 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-clk6d" podStartSLOduration=7.023962287 podStartE2EDuration="7.023962287s" podCreationTimestamp="2026-01-27 07:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.023500244 +0000 UTC m=+122.334604309" watchObservedRunningTime="2026-01-27 07:47:36.023962287 +0000 UTC m=+122.335066352"
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.032527 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2m2xz"]
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.045750 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d8mn9"]
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.071630 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69np8" podStartSLOduration=102.071606114 podStartE2EDuration="1m42.071606114s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.059415103 +0000 UTC m=+122.370519178" watchObservedRunningTime="2026-01-27 07:47:36.071606114 +0000 UTC m=+122.382710179"
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.096205 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.096493 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.596482212 +0000 UTC m=+122.907586277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:36 crc kubenswrapper[4799]: W0127 07:47:36.106569 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6abdebd_9ed0_44e3_934a_9472c0f92bc7.slice/crio-666305e2a2269d650bc93b548c0c234bcb9ea5e3f5e562b363a4034a93edbc42 WatchSource:0}: Error finding container 666305e2a2269d650bc93b548c0c234bcb9ea5e3f5e562b363a4034a93edbc42: Status 404 returned error can't find the container with id 666305e2a2269d650bc93b548c0c234bcb9ea5e3f5e562b363a4034a93edbc42
Jan 27 07:47:36 crc kubenswrapper[4799]: W0127 07:47:36.134162 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3817b4c1_1d03_4512_8a0e_f339f9c2fb5f.slice/crio-4cea7e1245fbe7587869d17fb857b5326e491b92a2cf5d13b0effb11f3723769 WatchSource:0}: Error finding container 4cea7e1245fbe7587869d17fb857b5326e491b92a2cf5d13b0effb11f3723769: Status 404 returned error can't find the container with id 4cea7e1245fbe7587869d17fb857b5326e491b92a2cf5d13b0effb11f3723769
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.197424 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.197779 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.697752593 +0000 UTC m=+123.008856658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.198677 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r"
Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.199119 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.699106221 +0000 UTC m=+123.010210286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.308693 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.309152 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.809133639 +0000 UTC m=+123.120237704 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.375034 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" event={"ID":"ae711767-328e-4007-94b6-59087a7ca625","Type":"ContainerStarted","Data":"5020a7d1ee2ed87a93d397d099242277ea572d42272f3082118bf34b67c68ad3"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.375460 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" event={"ID":"ae711767-328e-4007-94b6-59087a7ca625","Type":"ContainerStarted","Data":"761dbbfd2637ef6a86af40e68beafcf90ef884fc8a41029a2b1675f3a601cc6e"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.376978 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" event={"ID":"814c6bed-3956-4eff-9909-58d7b74247c5","Type":"ContainerStarted","Data":"9b432c709007c430a85d529193dc4419e4d45d449c664a1bdb4bae3f395fb1e4"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.377014 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9m4qv" podStartSLOduration=101.376996272 podStartE2EDuration="1m41.376996272s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
07:47:36.338605656 +0000 UTC m=+122.649709721" watchObservedRunningTime="2026-01-27 07:47:36.376996272 +0000 UTC m=+122.688100337" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.380199 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" event={"ID":"efe93705-6f73-4523-9c27-4e2b2486d7ad","Type":"ContainerStarted","Data":"0a185b185be0faa720cba8dd5dd31437c2a2da23c145cf2afbfda0c255a7524c"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.380252 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" event={"ID":"efe93705-6f73-4523-9c27-4e2b2486d7ad","Type":"ContainerStarted","Data":"cf30979efee7fe181b51cfd225ba994a26539c6b2a29f8a2a575093267c31d30"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.380544 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.385378 4799 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-497f2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.385421 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" podUID="efe93705-6f73-4523-9c27-4e2b2486d7ad" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.390814 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" 
event={"ID":"7e88970a-7b70-4335-ab92-5b927f6864bd","Type":"ContainerStarted","Data":"a4213122ececabcf1cdbff4fa1b1ed2362326b7492ce1a3e194d5af9f9238c31"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.390861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" event={"ID":"7e88970a-7b70-4335-ab92-5b927f6864bd","Type":"ContainerStarted","Data":"40a97ceffd4477ded1fd3bf958421fdbf52f0e092a340d44fda34fd7e0b18758"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.392232 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" event={"ID":"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4","Type":"ContainerStarted","Data":"cedb521f5eeb574edc8e345372cd4bbde73b7b130d4cc358e693e480ed52db06"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.393405 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" event={"ID":"5e84a898-a553-48bd-afbb-5688db92ff4b","Type":"ContainerStarted","Data":"a89c20ac09dff817fa88fbb0364d7473a99d7bbcf97a11e45e828693d21ef6c1"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.393651 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.410227 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.410553 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:36.910540254 +0000 UTC m=+123.221644319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.411100 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qvlrt" event={"ID":"6cf7c2a6-012a-48df-9c00-e6eac17da885","Type":"ContainerStarted","Data":"8e113669b9706b2447f9f93b14b8e4eecd67edd7e11cc79951bd136215033b52"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.411134 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qvlrt" event={"ID":"6cf7c2a6-012a-48df-9c00-e6eac17da885","Type":"ContainerStarted","Data":"497753c59703f04ddef81db84beccf1d0cdc149e1d410036567da070d1f8763b"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.425541 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" event={"ID":"ca13973c-b3d8-47d7-bd6c-14ebc72bd907","Type":"ContainerStarted","Data":"02e70d1c7f5046d310a17df04e8a18b14cc9fa637ddc1a3ce1d6d89a178813c9"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.425589 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" 
event={"ID":"ca13973c-b3d8-47d7-bd6c-14ebc72bd907","Type":"ContainerStarted","Data":"04722a010d0de8ff1b42723461b5ddf4072c880a4865649df10d19b71b5a6dd9"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.426198 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.438262 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" event={"ID":"17f2f9b7-aad3-4959-8193-3e3e1d525141","Type":"ContainerStarted","Data":"8a3f04fbee09e65a2184637f5aba1d77ae371641e34fb637a2dc2345cc5e74cc"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.444762 4799 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fv5p6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.444804 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.456058 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:36 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:36 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:36 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:36 crc 
kubenswrapper[4799]: I0127 07:47:36.456115 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.471421 4799 generic.go:334] "Generic (PLEG): container finished" podID="ebd2f02f-3d33-46f5-b78f-c3a81e326627" containerID="3540dbc951a699c3a389ae24634a602a04a309b68472520bcaf7ed5233d951d6" exitCode=0 Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.472778 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" event={"ID":"ebd2f02f-3d33-46f5-b78f-c3a81e326627","Type":"ContainerDied","Data":"3540dbc951a699c3a389ae24634a602a04a309b68472520bcaf7ed5233d951d6"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.472805 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" event={"ID":"ebd2f02f-3d33-46f5-b78f-c3a81e326627","Type":"ContainerStarted","Data":"8761a39f3f35bddfdefebdd5b81a80c43486a8580e212fe4963853af1b9b8a11"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.502152 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" event={"ID":"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2","Type":"ContainerStarted","Data":"5dc246d0b0c2a0b6fe3c1d710ea008eb4fb74e8c957bdc67278acb91686e9d75"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.502186 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" event={"ID":"f6e1f4db-f2d9-4334-99ea-57ec0b6711e2","Type":"ContainerStarted","Data":"385d25af01171b57939573f31524303779770b66d87db08bcd1f89950f06995d"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.514507 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.515729 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.015709804 +0000 UTC m=+123.326813869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.528418 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.537206 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t" event={"ID":"8f54e330-fce1-4959-89f0-76a62f86ae43","Type":"ContainerStarted","Data":"bd62bb3a98408b1c7b4ee6ccfad40ca9bbf4604a84e0a56a88db31e256697a4f"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.537265 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t" 
event={"ID":"8f54e330-fce1-4959-89f0-76a62f86ae43","Type":"ContainerStarted","Data":"a561e789b8bf4f4294243ceb8ea5f461a36fbed453281161df0f6b273bb7d60f"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.550869 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-lzvh6" podStartSLOduration=102.550622924 podStartE2EDuration="1m42.550622924s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.549289187 +0000 UTC m=+122.860393252" watchObservedRunningTime="2026-01-27 07:47:36.550622924 +0000 UTC m=+122.861726989" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.556082 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8g24k" event={"ID":"82fa445d-953b-4729-8c80-a2bc760f0ce3","Type":"ContainerStarted","Data":"ec88cf7eeb715ececda4451d88500925967863f3bdb3fbbe52855f095da69b73"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.610899 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" event={"ID":"de07a2d4-e916-4c2d-bb3b-b8a268461a71","Type":"ContainerStarted","Data":"272261540d0312313c865d885e6a6658b4903491f9d1509e8f00f7e6e826e9cd"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.614933 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" event={"ID":"f6abdebd-9ed0-44e3-934a-9472c0f92bc7","Type":"ContainerStarted","Data":"666305e2a2269d650bc93b548c0c234bcb9ea5e3f5e562b363a4034a93edbc42"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.615937 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.616213 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.116200834 +0000 UTC m=+123.427304899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.620875 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bl4wn" event={"ID":"1c1b6ac6-0dc3-4f65-bb94-d448893ae317","Type":"ContainerStarted","Data":"105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.620906 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bl4wn" event={"ID":"1c1b6ac6-0dc3-4f65-bb94-d448893ae317","Type":"ContainerStarted","Data":"0b4ba95d01efbdaf11f1a2f9f3c6434531db41dcd54291b58e351dd5dbdc1bbc"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.622705 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" 
event={"ID":"dd94cd3b-bc32-422f-8c10-dc6d7cb52453","Type":"ContainerStarted","Data":"748e3d48187d3aba3294bd9b9237a4b5009aa9e53bef4c38ea1ad7e385449f6f"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.670522 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" event={"ID":"c9eb96a5-27c6-4cab-889a-1938f92b95aa","Type":"ContainerStarted","Data":"a831244fdf841c9a57b994a738395117114b6a0f8624c6d11b54ebc2c910cba3"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.687513 4799 generic.go:334] "Generic (PLEG): container finished" podID="5fac797c-c9f7-45e7-91dd-1efa96411e06" containerID="daa08dddf55a85f3c4624e511b6ac76e575605b0fc5d24d1cf49c27cb622290c" exitCode=0 Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.687565 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" event={"ID":"5fac797c-c9f7-45e7-91dd-1efa96411e06","Type":"ContainerDied","Data":"daa08dddf55a85f3c4624e511b6ac76e575605b0fc5d24d1cf49c27cb622290c"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.700190 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bl4wn" podStartSLOduration=102.700180071 podStartE2EDuration="1m42.700180071s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.699948714 +0000 UTC m=+123.011052779" watchObservedRunningTime="2026-01-27 07:47:36.700180071 +0000 UTC m=+123.011284136" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.706514 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" event={"ID":"88a08ba8-7cde-40c0-9a88-b01b642c78df","Type":"ContainerStarted","Data":"c85a4cfca92ea742729356aa888a3545f80ee1842afc09e5d6622daa2d7557ad"} 
Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.719247 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.720633 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.220618384 +0000 UTC m=+123.531722449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.736450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xjwwr" event={"ID":"74820e6d-62e6-49db-8f4d-a49ae5fe95ee","Type":"ContainerStarted","Data":"6675aa7aa15e755673ddd3de4e1572d201636af1aed16489a5e19ab9f03c8b83"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.761616 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" event={"ID":"cac170c6-2d9b-4966-873b-a92ce0f3da29","Type":"ContainerStarted","Data":"701a17a979d237c9ef9bebfab48c6183191c7da0c532fb61ab2907ef50fcc58c"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.761676 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" event={"ID":"cac170c6-2d9b-4966-873b-a92ce0f3da29","Type":"ContainerStarted","Data":"eefd512b5b01503c479d4aba3d13474bf598da0c089661e306080601a57821dd"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.769399 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" podStartSLOduration=101.769382693 podStartE2EDuration="1m41.769382693s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.769109394 +0000 UTC m=+123.080213459" watchObservedRunningTime="2026-01-27 07:47:36.769382693 +0000 UTC m=+123.080486758" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.785790 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" event={"ID":"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f","Type":"ContainerStarted","Data":"4cea7e1245fbe7587869d17fb857b5326e491b92a2cf5d13b0effb11f3723769"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.809520 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" event={"ID":"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d","Type":"ContainerStarted","Data":"8d6dfba7518cb36da47cb77b0627cc13438aa2677519d2f2e70f59ddbe8fcb57"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.809564 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" event={"ID":"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d","Type":"ContainerStarted","Data":"77239951fd2855b009505f75ae1a240c2e09c9ccc30be72dbeb76d751971e46b"} Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.816564 4799 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-dc6gt" podStartSLOduration=101.816555076 podStartE2EDuration="1m41.816555076s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.814679933 +0000 UTC m=+123.125783998" watchObservedRunningTime="2026-01-27 07:47:36.816555076 +0000 UTC m=+123.127659141" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.821609 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.823286 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.323274244 +0000 UTC m=+123.634378309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.856931 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" podStartSLOduration=101.856909908 podStartE2EDuration="1m41.856909908s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.847723721 +0000 UTC m=+123.158827786" watchObservedRunningTime="2026-01-27 07:47:36.856909908 +0000 UTC m=+123.168013973" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.875496 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qvlrt" podStartSLOduration=7.875479979 podStartE2EDuration="7.875479979s" podCreationTimestamp="2026-01-27 07:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.87514037 +0000 UTC m=+123.186244455" watchObservedRunningTime="2026-01-27 07:47:36.875479979 +0000 UTC m=+123.186584044" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.890515 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" event={"ID":"f90c837a-43bf-4353-ba01-70a80be22306","Type":"ContainerStarted","Data":"2faf9fefc379bc83f01df9c2b2a986346ae4b5ad3c5793d41941e896915b8fb0"} Jan 27 
07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.911562 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9gr7w" podStartSLOduration=102.911546991 podStartE2EDuration="1m42.911546991s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.910469071 +0000 UTC m=+123.221573136" watchObservedRunningTime="2026-01-27 07:47:36.911546991 +0000 UTC m=+123.222651056" Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.923164 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.923413 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.423389214 +0000 UTC m=+123.734493279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.923485 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:36 crc kubenswrapper[4799]: E0127 07:47:36.924340 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.424285058 +0000 UTC m=+123.735389223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:36 crc kubenswrapper[4799]: I0127 07:47:36.948986 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zsf62" podStartSLOduration=101.948968571 podStartE2EDuration="1m41.948968571s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:36.948680664 +0000 UTC m=+123.259784729" watchObservedRunningTime="2026-01-27 07:47:36.948968571 +0000 UTC m=+123.260072636" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:36.999215 4799 generic.go:334] "Generic (PLEG): container finished" podID="6a7388c2-4452-4132-961e-3a2f24154237" containerID="a3bceaf1acf3fb2ef63ff2a553ea6545fdfdde49f19962f3b2045723907090e0" exitCode=0 Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:36.999562 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" event={"ID":"6a7388c2-4452-4132-961e-3a2f24154237","Type":"ContainerDied","Data":"a3bceaf1acf3fb2ef63ff2a553ea6545fdfdde49f19962f3b2045723907090e0"} Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:36.999590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" event={"ID":"6a7388c2-4452-4132-961e-3a2f24154237","Type":"ContainerStarted","Data":"693d4c8dbe27596156376adce701f701a9ced0ed336aa781c21a30db36a86382"} Jan 27 
07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.031139 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.032164 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.532146845 +0000 UTC m=+123.843250910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.032984 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" podStartSLOduration=102.032969058 podStartE2EDuration="1m42.032969058s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:37.029649266 +0000 UTC m=+123.340753331" watchObservedRunningTime="2026-01-27 07:47:37.032969058 +0000 UTC m=+123.344073133" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.040503 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" event={"ID":"b4251b28-e3a3-4694-b8d3-8106bacdfe86","Type":"ContainerStarted","Data":"1a6ab71e02086ec85fd59dfc9dae0bbff3fec56dc91f8aee1182a73e2df40cb4"} Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.109911 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" event={"ID":"fcf282b3-df77-4087-a390-c000adfd8f86","Type":"ContainerStarted","Data":"94a3781ad47a25c0cda9fdc1503c13617415b09aec6ac7fcbca09c38a5a7ba22"} Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.109959 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" event={"ID":"fcf282b3-df77-4087-a390-c000adfd8f86","Type":"ContainerStarted","Data":"0ad03248da32a06d2e71da404cd530ffff04a9c01ae2f3e8a3157cd6696eb5a5"} Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.121575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" event={"ID":"5735c0d4-af84-4c65-b453-88a9086e0d8c","Type":"ContainerStarted","Data":"d6a8f59c8ed2f9f6c3c330e17580fe5cf1ca03550e684669e125805803bb96bd"} Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.136696 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" event={"ID":"810999fd-fa8e-4e6c-9b07-bc58f174202b","Type":"ContainerStarted","Data":"96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a"} Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.137752 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.138310 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.138773 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.638755026 +0000 UTC m=+123.949859091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.186955 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" event={"ID":"2b678fa7-59f7-4a2c-8cae-3f71a17f8734","Type":"ContainerStarted","Data":"1208f4f477bb3a191a3a5ec57272b327bf0f95353f77bab7f6b7aaf0cb8ca5e7"} Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.196577 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.198271 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-lzvh6" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.240410 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.240683 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.740578513 +0000 UTC m=+124.051682588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.250818 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" podStartSLOduration=102.250792479 podStartE2EDuration="1m42.250792479s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:37.250560583 +0000 UTC m=+123.561664648" watchObservedRunningTime="2026-01-27 07:47:37.250792479 +0000 UTC m=+123.561896544" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.251103 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-64xcf" podStartSLOduration=102.251097699 
podStartE2EDuration="1m42.251097699s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:37.196852946 +0000 UTC m=+123.507957001" watchObservedRunningTime="2026-01-27 07:47:37.251097699 +0000 UTC m=+123.562201764" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.262881 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.270740 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.770713979 +0000 UTC m=+124.081818044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.301953 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" podStartSLOduration=102.301932385 podStartE2EDuration="1m42.301932385s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:37.301574995 +0000 UTC m=+123.612679070" watchObservedRunningTime="2026-01-27 07:47:37.301932385 +0000 UTC m=+123.613036450" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.365170 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.365362 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" podStartSLOduration=103.365349044 podStartE2EDuration="1m43.365349044s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:37.363704148 +0000 UTC m=+123.674808213" 
watchObservedRunningTime="2026-01-27 07:47:37.365349044 +0000 UTC m=+123.676453099" Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.366463 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.866446595 +0000 UTC m=+124.177550660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.436543 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rrhvs" podStartSLOduration=102.436516571 podStartE2EDuration="1m42.436516571s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:37.435452991 +0000 UTC m=+123.746557056" watchObservedRunningTime="2026-01-27 07:47:37.436516571 +0000 UTC m=+123.747620636" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.451894 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:37 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:37 crc kubenswrapper[4799]: 
[+]process-running ok Jan 27 07:47:37 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.451962 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.467173 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.467913 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:37.967897372 +0000 UTC m=+124.279001437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.568890 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.569252 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.069232545 +0000 UTC m=+124.380336610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.669966 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.670293 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.1702758 +0000 UTC m=+124.481379865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.771405 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.771780 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.271762248 +0000 UTC m=+124.582866313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.873434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.873852 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.373835411 +0000 UTC m=+124.684939476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.974203 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.974438 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.474387142 +0000 UTC m=+124.785491217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:37 crc kubenswrapper[4799]: I0127 07:47:37.974901 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:37 crc kubenswrapper[4799]: E0127 07:47:37.975327 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.475313919 +0000 UTC m=+124.786418084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.075967 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.076187 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.576152848 +0000 UTC m=+124.887256913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.076255 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.076568 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.576553929 +0000 UTC m=+124.887657994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.177980 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.178157 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.678131309 +0000 UTC m=+124.989235374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.178354 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.178643 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.678631153 +0000 UTC m=+124.989735218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.194809 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" event={"ID":"ebd2f02f-3d33-46f5-b78f-c3a81e326627","Type":"ContainerStarted","Data":"33ef92bbdd254c79dea288a2d1449246dd2ea94645684e595cf09a4381f3d10d"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.194854 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" event={"ID":"ebd2f02f-3d33-46f5-b78f-c3a81e326627","Type":"ContainerStarted","Data":"fa0c3d9bac3f174acc65069c9251d9d26208010f31c2a3f4d87243cadfc0e8ed"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.197115 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mnv4z" event={"ID":"88a08ba8-7cde-40c0-9a88-b01b642c78df","Type":"ContainerStarted","Data":"b00b1af9e7999fd846163909b2a68d606c66e92e9db5865b604be08dc3b2b086"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.200414 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" event={"ID":"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f","Type":"ContainerStarted","Data":"4a7970a2c2a65d303b50b3a70c2f59b093ddb07ae1aabd46a50eefeefd17682a"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.200449 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" 
event={"ID":"3817b4c1-1d03-4512-8a0e-f339f9c2fb5f","Type":"ContainerStarted","Data":"764617c0214f5c8bb6b967bfb1d0d38696cc1da3c6beec821b003985a089c848"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.202233 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-svppz" event={"ID":"2aec4ae4-6eeb-4e1f-8912-8401d5607d2d","Type":"ContainerStarted","Data":"16fb80d1db8fdff07ccb7916bd76a756d66385be3616e1a603a2b422521889be"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.204596 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xjwwr" event={"ID":"74820e6d-62e6-49db-8f4d-a49ae5fe95ee","Type":"ContainerStarted","Data":"854f3b6a2e76eb29ca2c70d75714a82e01a537c23e578ccd45b3c80ec6aec581"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.204645 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xjwwr" event={"ID":"74820e6d-62e6-49db-8f4d-a49ae5fe95ee","Type":"ContainerStarted","Data":"88c5f3cf61149bb92e29419035d2604cf1db810387d84d33a3544e42df3e36f4"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.204709 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.206910 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" event={"ID":"fcf282b3-df77-4087-a390-c000adfd8f86","Type":"ContainerStarted","Data":"0659a1da26e1cd0381c9ac733333a6f8ccad9726ccad666ce3671380610e29c4"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.207006 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.208321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" event={"ID":"f6abdebd-9ed0-44e3-934a-9472c0f92bc7","Type":"ContainerStarted","Data":"3da08fe63710beb10d23471a8f427cd393bbb07c84b29878cd91f85870262b19"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.210088 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" event={"ID":"de07a2d4-e916-4c2d-bb3b-b8a268461a71","Type":"ContainerStarted","Data":"2b9bdf16be7602c152391c3f4392da1ce116663c6453dcd9991c2f2de697ea9a"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.211744 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" event={"ID":"814c6bed-3956-4eff-9909-58d7b74247c5","Type":"ContainerStarted","Data":"87d75fa1015d3931907d1f4d737972e84ea4c481056e986da7089f56cd8b1201"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.214182 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" event={"ID":"6a7388c2-4452-4132-961e-3a2f24154237","Type":"ContainerStarted","Data":"2b87237fbc64c2ecd5a507a872bf3ed509f234fc0399dca06508a5833870acbd"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.215917 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8g24k" event={"ID":"82fa445d-953b-4729-8c80-a2bc760f0ce3","Type":"ContainerStarted","Data":"c35972054a033a5b6f2781585aeb1dadd137d9ec5e2c459137465cbd3b938eab"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.217504 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" event={"ID":"17f2f9b7-aad3-4959-8193-3e3e1d525141","Type":"ContainerStarted","Data":"34102875f10d49a8a518c678e0ffcb70cbd69b6fc1949912a70269766c26aba9"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.219153 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" event={"ID":"b4251b28-e3a3-4694-b8d3-8106bacdfe86","Type":"ContainerStarted","Data":"24deabeb2a7d165b3d819f1a9ebe60983fecb52dbd42491892650c93680c69bf"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.221013 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" event={"ID":"2b678fa7-59f7-4a2c-8cae-3f71a17f8734","Type":"ContainerStarted","Data":"bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.221213 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.222654 4799 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2m2xz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.222701 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.222861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" event={"ID":"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4","Type":"ContainerStarted","Data":"54d90241428a4d261cd2763d0b5ce864bcd8378715d85ea8738824b2cbdbf296"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.222901 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" event={"ID":"632a04e6-2ac7-4d81-a22c-2e3d4b58afe4","Type":"ContainerStarted","Data":"b1c7306a225d73d83f021a35178258bc544107f50a8b19fadeffac9e86f11a11"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.223409 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" podStartSLOduration=104.223395319 podStartE2EDuration="1m44.223395319s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.219236512 +0000 UTC m=+124.530340607" watchObservedRunningTime="2026-01-27 07:47:38.223395319 +0000 UTC m=+124.534499384" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.225080 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" event={"ID":"5fac797c-c9f7-45e7-91dd-1efa96411e06","Type":"ContainerStarted","Data":"033c66c65f9daf25ca50ec33a2d5e48cb3cbb950a5b96ea2d25ca992c2311aaa"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.225598 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.227408 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" event={"ID":"c9eb96a5-27c6-4cab-889a-1938f92b95aa","Type":"ContainerStarted","Data":"8db9e44c0ef4354666000eaf4a6a26097514ca6e22d73d5148596178ee116651"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.227574 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 
07:47:38.229586 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" event={"ID":"f90c837a-43bf-4353-ba01-70a80be22306","Type":"ContainerStarted","Data":"626e24cf579b39bf0b54eee2fe813f57d6d406a1df0771bd0ed0629ab09ab241"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.233313 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t" event={"ID":"8f54e330-fce1-4959-89f0-76a62f86ae43","Type":"ContainerStarted","Data":"a8edb159a8f6d7368dbf124fbd86c5b4899d3790ee3216c058c0aee066416faf"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.235733 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" event={"ID":"dd94cd3b-bc32-422f-8c10-dc6d7cb52453","Type":"ContainerStarted","Data":"e1ee6f2c20ddbcaab9a79562d985359753aa24248e14bccde107f306eddc1a73"} Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.240421 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-497f2" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.242913 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.245290 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mn8tq" podStartSLOduration=103.245279663 podStartE2EDuration="1m43.245279663s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.245048657 +0000 UTC m=+124.556152722" 
watchObservedRunningTime="2026-01-27 07:47:38.245279663 +0000 UTC m=+124.556383718" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.249421 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.273625 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lrjsc" podStartSLOduration=103.273609608 podStartE2EDuration="1m43.273609608s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.271157569 +0000 UTC m=+124.582261654" watchObservedRunningTime="2026-01-27 07:47:38.273609608 +0000 UTC m=+124.584713673" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.279456 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.279636 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.779604916 +0000 UTC m=+125.090709011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.281394 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.282827 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.782806426 +0000 UTC m=+125.093910491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.296976 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" podStartSLOduration=103.296960193 podStartE2EDuration="1m43.296960193s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.295640066 +0000 UTC m=+124.606744131" watchObservedRunningTime="2026-01-27 07:47:38.296960193 +0000 UTC m=+124.608064258" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.346398 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fmzz6" podStartSLOduration=103.34637981 podStartE2EDuration="1m43.34637981s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.345166806 +0000 UTC m=+124.656270871" watchObservedRunningTime="2026-01-27 07:47:38.34637981 +0000 UTC m=+124.657483875" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.368106 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dxknv" podStartSLOduration=103.368084978 podStartE2EDuration="1m43.368084978s" podCreationTimestamp="2026-01-27 07:45:55 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.366934187 +0000 UTC m=+124.678038252" watchObservedRunningTime="2026-01-27 07:47:38.368084978 +0000 UTC m=+124.679189033" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.383343 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.383550 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.883525772 +0000 UTC m=+125.194629847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.394665 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.395143 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:38.895130838 +0000 UTC m=+125.206234903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.398895 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" podStartSLOduration=103.398871462 podStartE2EDuration="1m43.398871462s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.395145588 +0000 UTC m=+124.706249663" watchObservedRunningTime="2026-01-27 07:47:38.398871462 +0000 UTC m=+124.709975547" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.429388 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xjwwr" podStartSLOduration=9.429372618 podStartE2EDuration="9.429372618s" podCreationTimestamp="2026-01-27 07:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.426753154 +0000 UTC m=+124.737857229" watchObservedRunningTime="2026-01-27 07:47:38.429372618 +0000 UTC m=+124.740476683" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.451866 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:38 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 
27 07:47:38 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:38 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.451928 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.460626 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" podStartSLOduration=104.460611855 podStartE2EDuration="1m44.460611855s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.458787383 +0000 UTC m=+124.769891448" watchObservedRunningTime="2026-01-27 07:47:38.460611855 +0000 UTC m=+124.771715920" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.507043 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.507371 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.007356606 +0000 UTC m=+125.318460671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.529797 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-d8mn9" podStartSLOduration=103.529747035 podStartE2EDuration="1m43.529747035s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.513414086 +0000 UTC m=+124.824518151" watchObservedRunningTime="2026-01-27 07:47:38.529747035 +0000 UTC m=+124.840851100" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.595005 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxqzc" podStartSLOduration=103.594988466 podStartE2EDuration="1m43.594988466s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.594960444 +0000 UTC m=+124.906064519" watchObservedRunningTime="2026-01-27 07:47:38.594988466 +0000 UTC m=+124.906092531" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.608173 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: 
\"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.608587 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.108566426 +0000 UTC m=+125.419670581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.635870 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7gnsz" podStartSLOduration=104.635852072 podStartE2EDuration="1m44.635852072s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.633679661 +0000 UTC m=+124.944783726" watchObservedRunningTime="2026-01-27 07:47:38.635852072 +0000 UTC m=+124.946956137" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.688143 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" podStartSLOduration=104.688127578 podStartE2EDuration="1m44.688127578s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 07:47:38.686605376 +0000 UTC m=+124.997709431" watchObservedRunningTime="2026-01-27 07:47:38.688127578 +0000 UTC m=+124.999231643" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.707170 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.707244 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.708987 4799 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-rdpz8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.16:8443/livez\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.709043 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" podUID="6a7388c2-4452-4132-961e-3a2f24154237" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.16:8443/livez\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.709533 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.709703 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 07:47:39.209681053 +0000 UTC m=+125.520785118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.709990 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.710335 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.210321931 +0000 UTC m=+125.521425996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.735084 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" podStartSLOduration=103.735063926 podStartE2EDuration="1m43.735063926s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.730960661 +0000 UTC m=+125.042064726" watchObservedRunningTime="2026-01-27 07:47:38.735063926 +0000 UTC m=+125.046167991" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.737947 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g6ktz"] Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.738916 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.746078 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.783944 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6ktz"] Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.811144 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.811501 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.311483289 +0000 UTC m=+125.622587344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.843376 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tbm5t" podStartSLOduration=103.843355674 podStartE2EDuration="1m43.843355674s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.843236861 +0000 UTC m=+125.154340926" watchObservedRunningTime="2026-01-27 07:47:38.843355674 +0000 UTC m=+125.154459739" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.844458 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" podStartSLOduration=103.844452115 podStartE2EDuration="1m43.844452115s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.826414009 +0000 UTC m=+125.137518084" watchObservedRunningTime="2026-01-27 07:47:38.844452115 +0000 UTC m=+125.155556170" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.856117 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.856163 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.858104 4799 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9t8n9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.858170 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" podUID="ebd2f02f-3d33-46f5-b78f-c3a81e326627" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.912889 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.913033 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-catalog-content\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.913070 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-utilities\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " 
pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.913113 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd8z\" (UniqueName: \"kubernetes.io/projected/c808aeb6-0065-4efc-9d98-9ee6c97e3250-kube-api-access-pfd8z\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:38 crc kubenswrapper[4799]: E0127 07:47:38.913603 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.413582654 +0000 UTC m=+125.724686759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.930509 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bhdxs" podStartSLOduration=103.930492538 podStartE2EDuration="1m43.930492538s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:38.874623751 +0000 UTC m=+125.185727816" watchObservedRunningTime="2026-01-27 07:47:38.930492538 +0000 UTC m=+125.241596613" Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.971124 4799 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kr6pr"] Jan 27 07:47:38 crc kubenswrapper[4799]: I0127 07:47:38.985497 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.001893 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.016929 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.017190 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.51715892 +0000 UTC m=+125.828262985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.017240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.017461 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-catalog-content\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.017485 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-utilities\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.017518 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfd8z\" (UniqueName: \"kubernetes.io/projected/c808aeb6-0065-4efc-9d98-9ee6c97e3250-kube-api-access-pfd8z\") pod \"community-operators-g6ktz\" (UID: 
\"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.018230 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.518223 +0000 UTC m=+125.829327065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.018766 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-catalog-content\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.018960 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-utilities\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.029682 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kr6pr"] Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.075239 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pfd8z\" (UniqueName: \"kubernetes.io/projected/c808aeb6-0065-4efc-9d98-9ee6c97e3250-kube-api-access-pfd8z\") pod \"community-operators-g6ktz\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.119122 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.119338 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-catalog-content\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.119394 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-utilities\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.119445 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhtn\" (UniqueName: \"kubernetes.io/projected/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-kube-api-access-plhtn\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.119599 4799 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.619583774 +0000 UTC m=+125.930687839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.155472 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j7dw8"] Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.156400 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.220944 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-catalog-content\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.220987 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.221019 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-utilities\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.221062 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plhtn\" (UniqueName: \"kubernetes.io/projected/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-kube-api-access-plhtn\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.221674 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-catalog-content\") pod 
\"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.221900 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.721889454 +0000 UTC m=+126.032993519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.222205 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-utilities\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.241176 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8g24k" event={"ID":"82fa445d-953b-4729-8c80-a2bc760f0ce3","Type":"ContainerStarted","Data":"8b25b4bafa87329f3d97ac5d75a0f30e7653e2742a5f8cb25f25096cc6f9b918"} Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.241437 4799 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2m2xz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" 
start-of-body= Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.241488 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.245672 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.247793 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7dw8"] Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.265285 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plhtn\" (UniqueName: \"kubernetes.io/projected/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-kube-api-access-plhtn\") pod \"certified-operators-kr6pr\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.322351 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.322577 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.822534178 +0000 UTC m=+126.133638243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.322650 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-utilities\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.322698 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjcm\" (UniqueName: \"kubernetes.io/projected/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-kube-api-access-7sjcm\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.322755 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.322917 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-catalog-content\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.323050 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.823038692 +0000 UTC m=+126.134142757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.327221 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.337216 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x2wfc"] Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.338183 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.351704 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2wfc"] Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.353447 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.424324 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.424526 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.924499659 +0000 UTC m=+126.235603724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.424563 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjcm\" (UniqueName: \"kubernetes.io/projected/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-kube-api-access-7sjcm\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.424676 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.424881 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-catalog-content\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.425448 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-utilities\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.425996 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-catalog-content\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.426280 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:39.926270569 +0000 UTC m=+126.237374634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.427472 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-utilities\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.446995 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjcm\" (UniqueName: \"kubernetes.io/projected/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-kube-api-access-7sjcm\") pod \"community-operators-j7dw8\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.448779 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:39 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:39 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:39 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.448833 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.449950 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lfclk" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.474643 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.527403 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.527582 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.027556741 +0000 UTC m=+126.338660806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.527743 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-utilities\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.527853 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-catalog-content\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.527891 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnbpp\" (UniqueName: \"kubernetes.io/projected/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-kube-api-access-wnbpp\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.527939 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.528205 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.028195829 +0000 UTC m=+126.339299894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.631943 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.632200 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-catalog-content\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.632229 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnbpp\" (UniqueName: 
\"kubernetes.io/projected/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-kube-api-access-wnbpp\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.632262 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.132235808 +0000 UTC m=+126.443339873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.632314 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.632462 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-utilities\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.632851 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-utilities\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.633072 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.133065251 +0000 UTC m=+126.444169316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.633127 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-catalog-content\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.668731 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnbpp\" (UniqueName: \"kubernetes.io/projected/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-kube-api-access-wnbpp\") pod \"certified-operators-x2wfc\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.735897 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.736238 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.236223276 +0000 UTC m=+126.547327341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.840185 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.840734 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.340722398 +0000 UTC m=+126.651826463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.975851 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:47:39 crc kubenswrapper[4799]: I0127 07:47:39.980149 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:39 crc kubenswrapper[4799]: E0127 07:47:39.980791 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.480771077 +0000 UTC m=+126.791875142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.083017 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:40 crc kubenswrapper[4799]: E0127 07:47:40.083722 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.583709706 +0000 UTC m=+126.894813771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.097672 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6ktz"] Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.183819 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:40 crc kubenswrapper[4799]: E0127 07:47:40.184076 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.68404833 +0000 UTC m=+126.995152395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.196636 4799 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.295053 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:40 crc kubenswrapper[4799]: E0127 07:47:40.295736 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.795724214 +0000 UTC m=+127.106828279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.301489 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6ktz" event={"ID":"c808aeb6-0065-4efc-9d98-9ee6c97e3250","Type":"ContainerStarted","Data":"c652a30720441b8d5fcd7ee284beab017f9e4e51e040b619d56450afb171b77d"} Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.374611 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7zk6z" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.402016 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:40 crc kubenswrapper[4799]: E0127 07:47:40.402521 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:40.90250052 +0000 UTC m=+127.213604585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.454425 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:40 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:40 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:40 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.454471 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.504910 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:40 crc kubenswrapper[4799]: E0127 07:47:40.505161 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 07:47:41.005151101 +0000 UTC m=+127.316255156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.519439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kr6pr"] Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.606192 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:40 crc kubenswrapper[4799]: E0127 07:47:40.607036 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 07:47:41.107011899 +0000 UTC m=+127.418115964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.626449 4799 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T07:47:40.196659324Z","Handler":null,"Name":""} Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.627058 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j7dw8"] Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.710125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:40 crc kubenswrapper[4799]: E0127 07:47:40.710552 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 07:47:41.210538433 +0000 UTC m=+127.521642498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ww5r" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.714548 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2wfc"] Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.720467 4799 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.720502 4799 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.736986 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tm4nj"] Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.737940 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.750416 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.752962 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tm4nj"] Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.814915 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.831281 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.916204 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-catalog-content\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.916317 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g62kr\" (UniqueName: \"kubernetes.io/projected/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-kube-api-access-g62kr\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.916352 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.916397 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-utilities\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.939183 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 07:47:40 crc kubenswrapper[4799]: I0127 07:47:40.939224 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.001927 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ww5r\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.017906 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g62kr\" (UniqueName: \"kubernetes.io/projected/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-kube-api-access-g62kr\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.017979 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-utilities\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.018020 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-catalog-content\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.018418 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-catalog-content\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.018883 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-utilities\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.048435 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g62kr\" (UniqueName: \"kubernetes.io/projected/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-kube-api-access-g62kr\") pod \"redhat-marketplace-tm4nj\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.115671 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-76z59"] Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.117195 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.135381 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76z59"] Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.149531 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.203249 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.212668 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.220147 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-utilities\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.220215 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-catalog-content\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.220240 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4fmv\" (UniqueName: \"kubernetes.io/projected/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-kube-api-access-x4fmv\") pod \"redhat-marketplace-76z59\" (UID: 
\"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.321949 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-utilities\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.322004 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-catalog-content\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.322025 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4fmv\" (UniqueName: \"kubernetes.io/projected/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-kube-api-access-x4fmv\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.322723 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-utilities\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.322938 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-catalog-content\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " 
pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.344082 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4fmv\" (UniqueName: \"kubernetes.io/projected/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-kube-api-access-x4fmv\") pod \"redhat-marketplace-76z59\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") " pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.360117 4799 generic.go:334] "Generic (PLEG): container finished" podID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerID="7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5" exitCode=0 Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.360229 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6ktz" event={"ID":"c808aeb6-0065-4efc-9d98-9ee6c97e3250","Type":"ContainerDied","Data":"7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.363527 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.373741 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8g24k" event={"ID":"82fa445d-953b-4729-8c80-a2bc760f0ce3","Type":"ContainerStarted","Data":"44780fef9791a4318e9b253f990fcdb6a52f34fe6c1ac6c407dc0c86abd79747"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.374159 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8g24k" event={"ID":"82fa445d-953b-4729-8c80-a2bc760f0ce3","Type":"ContainerStarted","Data":"cc8ff191b018d5e1feb7dea83d1c27d568240a0fe3a2feaa36f8aeecdc0d445b"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.375265 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerID="b315f2d217d14c5faa843fde12396659021872465ecb116f152432f7abfb94e7" exitCode=0 Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.375372 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2wfc" event={"ID":"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a","Type":"ContainerDied","Data":"b315f2d217d14c5faa843fde12396659021872465ecb116f152432f7abfb94e7"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.375398 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2wfc" event={"ID":"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a","Type":"ContainerStarted","Data":"8e134aa9306fd64388dd0b422dbb363baadf3995f99da26925faaabb2b95b6b4"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.380075 4799 generic.go:334] "Generic (PLEG): container finished" podID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerID="e4f7c8a8929df5cd9c75adb43050ca80063dfa2b314aac3b96a1522808a4d572" exitCode=0 Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.380148 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7dw8" event={"ID":"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7","Type":"ContainerDied","Data":"e4f7c8a8929df5cd9c75adb43050ca80063dfa2b314aac3b96a1522808a4d572"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.380209 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7dw8" event={"ID":"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7","Type":"ContainerStarted","Data":"a9b7d5b77c8f93652264ad8f5659c8420005b105b538b532d443b006a702ac78"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.390668 4799 generic.go:334] "Generic (PLEG): container finished" podID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerID="4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01" exitCode=0 Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.390808 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr6pr" event={"ID":"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd","Type":"ContainerDied","Data":"4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.390876 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr6pr" event={"ID":"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd","Type":"ContainerStarted","Data":"3627d321c0c285955306d77dca7cb838c3313036b6f8f85176e579ea727ab0a5"} Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.429175 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.450777 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:41 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:41 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:41 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.450814 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.471840 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8g24k" podStartSLOduration=12.471821893 podStartE2EDuration="12.471821893s" podCreationTimestamp="2026-01-27 07:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:41.465220927 +0000 UTC m=+127.776324992" watchObservedRunningTime="2026-01-27 07:47:41.471821893 +0000 UTC m=+127.782925958" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.548675 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tm4nj"] Jan 27 07:47:41 crc kubenswrapper[4799]: W0127 07:47:41.565417 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f28fa44_7662_40a4_a2c2_81bb5a9c4ace.slice/crio-bdee9a0214aa7ae0857b3e3694a11f04db99a3f24c58e9a7f6cb8b71abae107e WatchSource:0}: Error finding container bdee9a0214aa7ae0857b3e3694a11f04db99a3f24c58e9a7f6cb8b71abae107e: Status 404 returned error can't find the container with id bdee9a0214aa7ae0857b3e3694a11f04db99a3f24c58e9a7f6cb8b71abae107e Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.668764 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ww5r"] Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.752087 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.753250 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.756426 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.756712 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.774183 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.837906 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/318bddae-38ad-45d5-b5ee-08a28fa55b39-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.837987 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/318bddae-38ad-45d5-b5ee-08a28fa55b39-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.902519 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76z59"] Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.939571 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/318bddae-38ad-45d5-b5ee-08a28fa55b39-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.939668 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/318bddae-38ad-45d5-b5ee-08a28fa55b39-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.939749 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/318bddae-38ad-45d5-b5ee-08a28fa55b39-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:41 crc kubenswrapper[4799]: I0127 07:47:41.964530 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/318bddae-38ad-45d5-b5ee-08a28fa55b39-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.084291 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.118401 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zxtw5"] Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.124970 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.128742 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.133162 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxtw5"] Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.243819 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-catalog-content\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.243899 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-utilities\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.243913 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbk4\" (UniqueName: \"kubernetes.io/projected/472d8035-24d2-4d6c-bb9d-4f932d4be020-kube-api-access-6tbk4\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.345564 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-catalog-content\") pod \"redhat-operators-zxtw5\" (UID: 
\"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.345666 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-utilities\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.345689 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbk4\" (UniqueName: \"kubernetes.io/projected/472d8035-24d2-4d6c-bb9d-4f932d4be020-kube-api-access-6tbk4\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.348090 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-catalog-content\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.348329 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-utilities\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.382862 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.389225 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbk4\" (UniqueName: 
\"kubernetes.io/projected/472d8035-24d2-4d6c-bb9d-4f932d4be020-kube-api-access-6tbk4\") pod \"redhat-operators-zxtw5\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: W0127 07:47:42.408656 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod318bddae_38ad_45d5_b5ee_08a28fa55b39.slice/crio-2dbcb591f5fb0f7e216e84282489453ae0834aaa093fa1688ac720673d9fb76a WatchSource:0}: Error finding container 2dbcb591f5fb0f7e216e84282489453ae0834aaa093fa1688ac720673d9fb76a: Status 404 returned error can't find the container with id 2dbcb591f5fb0f7e216e84282489453ae0834aaa093fa1688ac720673d9fb76a Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.419466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" event={"ID":"13ca58bb-9a4a-420d-b692-9ceda01d8b0c","Type":"ContainerStarted","Data":"e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47"} Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.419507 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" event={"ID":"13ca58bb-9a4a-420d-b692-9ceda01d8b0c","Type":"ContainerStarted","Data":"075c302c9836f1cd2f0a0ee00a87d27f09e37426a11318e8226bbde65915c475"} Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.419546 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.422009 4799 generic.go:334] "Generic (PLEG): container finished" podID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerID="5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983" exitCode=0 Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.422120 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tm4nj" event={"ID":"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace","Type":"ContainerDied","Data":"5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983"} Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.422148 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tm4nj" event={"ID":"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace","Type":"ContainerStarted","Data":"bdee9a0214aa7ae0857b3e3694a11f04db99a3f24c58e9a7f6cb8b71abae107e"} Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.424312 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerID="0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38" exitCode=0 Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.424367 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76z59" event={"ID":"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5","Type":"ContainerDied","Data":"0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38"} Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.424388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76z59" event={"ID":"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5","Type":"ContainerStarted","Data":"c3f01ed9bdc03bdb4abcf1a39379d74717ca9b7199fdadae298e6c89a4495f4b"} Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.426969 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"318bddae-38ad-45d5-b5ee-08a28fa55b39","Type":"ContainerStarted","Data":"2dbcb591f5fb0f7e216e84282489453ae0834aaa093fa1688ac720673d9fb76a"} Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.450847 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" podStartSLOduration=107.450825411 
podStartE2EDuration="1m47.450825411s" podCreationTimestamp="2026-01-27 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:47:42.449181015 +0000 UTC m=+128.760285080" watchObservedRunningTime="2026-01-27 07:47:42.450825411 +0000 UTC m=+128.761929466" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.456715 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:42 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:42 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:42 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.456759 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.487454 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.492640 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.518120 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g7thd"] Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.519420 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.533173 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g7thd"] Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.649138 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpq8m\" (UniqueName: \"kubernetes.io/projected/6be235d3-0500-4c98-abf6-a8709c12e8a7-kube-api-access-qpq8m\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.649207 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-catalog-content\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.649402 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-utilities\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.750395 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-utilities\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.750442 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qpq8m\" (UniqueName: \"kubernetes.io/projected/6be235d3-0500-4c98-abf6-a8709c12e8a7-kube-api-access-qpq8m\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.750476 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-catalog-content\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.750848 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-utilities\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.752632 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-catalog-content\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.773020 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpq8m\" (UniqueName: \"kubernetes.io/projected/6be235d3-0500-4c98-abf6-a8709c12e8a7-kube-api-access-qpq8m\") pod \"redhat-operators-g7thd\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.839050 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:47:42 crc kubenswrapper[4799]: I0127 07:47:42.864484 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxtw5"] Jan 27 07:47:42 crc kubenswrapper[4799]: W0127 07:47:42.892951 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod472d8035_24d2_4d6c_bb9d_4f932d4be020.slice/crio-3ecbf07cf07a66853956b24adbfb44442c2b2cbdf218c845b7e4451289f8c364 WatchSource:0}: Error finding container 3ecbf07cf07a66853956b24adbfb44442c2b2cbdf218c845b7e4451289f8c364: Status 404 returned error can't find the container with id 3ecbf07cf07a66853956b24adbfb44442c2b2cbdf218c845b7e4451289f8c364 Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.189210 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g7thd"] Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.335061 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnr7q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.335107 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tnr7q" podUID="a593dc31-38ff-4849-9ad0-cbaf0b6d1547" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.335145 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnr7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 07:47:43 
crc kubenswrapper[4799]: I0127 07:47:43.335195 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tnr7q" podUID="a593dc31-38ff-4849-9ad0-cbaf0b6d1547" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.440275 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"318bddae-38ad-45d5-b5ee-08a28fa55b39","Type":"ContainerStarted","Data":"2cff14c8affb38b9a483f291320395a27f63dd858289f4f0ef5a8f7f10841091"} Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.445942 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.446541 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7thd" event={"ID":"6be235d3-0500-4c98-abf6-a8709c12e8a7","Type":"ContainerStarted","Data":"58978e751c1455ad3b65c627b5f82e263865cecd44473883d2809e540a948e6b"} Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.448345 4799 generic.go:334] "Generic (PLEG): container finished" podID="de07a2d4-e916-4c2d-bb3b-b8a268461a71" containerID="2b9bdf16be7602c152391c3f4392da1ce116663c6453dcd9991c2f2de697ea9a" exitCode=0 Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.448388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" event={"ID":"de07a2d4-e916-4c2d-bb3b-b8a268461a71","Type":"ContainerDied","Data":"2b9bdf16be7602c152391c3f4392da1ce116663c6453dcd9991c2f2de697ea9a"} Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.449282 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:43 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:43 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:43 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.449350 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.450759 4799 generic.go:334] "Generic (PLEG): container finished" podID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerID="0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230" exitCode=0 Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.451228 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtw5" event={"ID":"472d8035-24d2-4d6c-bb9d-4f932d4be020","Type":"ContainerDied","Data":"0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230"} Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.451257 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtw5" event={"ID":"472d8035-24d2-4d6c-bb9d-4f932d4be020","Type":"ContainerStarted","Data":"3ecbf07cf07a66853956b24adbfb44442c2b2cbdf218c845b7e4451289f8c364"} Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.466241 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.466223133 podStartE2EDuration="2.466223133s" podCreationTimestamp="2026-01-27 07:47:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
07:47:43.457326293 +0000 UTC m=+129.768430368" watchObservedRunningTime="2026-01-27 07:47:43.466223133 +0000 UTC m=+129.777327198" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.717572 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.724540 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rdpz8" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.852568 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.853780 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.862009 4799 patch_prober.go:28] interesting pod/console-f9d7485db-bl4wn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.862111 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bl4wn" podUID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.863708 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:43 crc kubenswrapper[4799]: I0127 07:47:43.870429 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9t8n9" Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 
07:47:44.119492 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.448316 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:44 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:44 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:44 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.448437 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.481687 4799 generic.go:334] "Generic (PLEG): container finished" podID="318bddae-38ad-45d5-b5ee-08a28fa55b39" containerID="2cff14c8affb38b9a483f291320395a27f63dd858289f4f0ef5a8f7f10841091" exitCode=0 Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.481830 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"318bddae-38ad-45d5-b5ee-08a28fa55b39","Type":"ContainerDied","Data":"2cff14c8affb38b9a483f291320395a27f63dd858289f4f0ef5a8f7f10841091"} Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.489483 4799 generic.go:334] "Generic (PLEG): container finished" podID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerID="9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4" exitCode=0 Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.490653 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7thd" 
event={"ID":"6be235d3-0500-4c98-abf6-a8709c12e8a7","Type":"ContainerDied","Data":"9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4"} Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.825872 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.919924 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de07a2d4-e916-4c2d-bb3b-b8a268461a71-config-volume\") pod \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.920073 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djfv8\" (UniqueName: \"kubernetes.io/projected/de07a2d4-e916-4c2d-bb3b-b8a268461a71-kube-api-access-djfv8\") pod \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.920104 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de07a2d4-e916-4c2d-bb3b-b8a268461a71-secret-volume\") pod \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\" (UID: \"de07a2d4-e916-4c2d-bb3b-b8a268461a71\") " Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.923000 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de07a2d4-e916-4c2d-bb3b-b8a268461a71-config-volume" (OuterVolumeSpecName: "config-volume") pod "de07a2d4-e916-4c2d-bb3b-b8a268461a71" (UID: "de07a2d4-e916-4c2d-bb3b-b8a268461a71"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.929483 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de07a2d4-e916-4c2d-bb3b-b8a268461a71-kube-api-access-djfv8" (OuterVolumeSpecName: "kube-api-access-djfv8") pod "de07a2d4-e916-4c2d-bb3b-b8a268461a71" (UID: "de07a2d4-e916-4c2d-bb3b-b8a268461a71"). InnerVolumeSpecName "kube-api-access-djfv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:47:44 crc kubenswrapper[4799]: I0127 07:47:44.951776 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de07a2d4-e916-4c2d-bb3b-b8a268461a71-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "de07a2d4-e916-4c2d-bb3b-b8a268461a71" (UID: "de07a2d4-e916-4c2d-bb3b-b8a268461a71"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.021725 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djfv8\" (UniqueName: \"kubernetes.io/projected/de07a2d4-e916-4c2d-bb3b-b8a268461a71-kube-api-access-djfv8\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.021760 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de07a2d4-e916-4c2d-bb3b-b8a268461a71-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.021769 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de07a2d4-e916-4c2d-bb3b-b8a268461a71-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.448924 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:45 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:45 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:45 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.448982 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.539342 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" event={"ID":"de07a2d4-e916-4c2d-bb3b-b8a268461a71","Type":"ContainerDied","Data":"272261540d0312313c865d885e6a6658b4903491f9d1509e8f00f7e6e826e9cd"} Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.539408 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272261540d0312313c865d885e6a6658b4903491f9d1509e8f00f7e6e826e9cd" Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.539368 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc" Jan 27 07:47:45 crc kubenswrapper[4799]: E0127 07:47:45.702180 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde07a2d4_e916_4c2d_bb3b_b8a268461a71.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde07a2d4_e916_4c2d_bb3b_b8a268461a71.slice/crio-272261540d0312313c865d885e6a6658b4903491f9d1509e8f00f7e6e826e9cd\": RecentStats: unable to find data in memory cache]" Jan 27 07:47:45 crc kubenswrapper[4799]: I0127 07:47:45.884646 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.039177 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/318bddae-38ad-45d5-b5ee-08a28fa55b39-kubelet-dir\") pod \"318bddae-38ad-45d5-b5ee-08a28fa55b39\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.039229 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/318bddae-38ad-45d5-b5ee-08a28fa55b39-kube-api-access\") pod \"318bddae-38ad-45d5-b5ee-08a28fa55b39\" (UID: \"318bddae-38ad-45d5-b5ee-08a28fa55b39\") " Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.039878 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/318bddae-38ad-45d5-b5ee-08a28fa55b39-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "318bddae-38ad-45d5-b5ee-08a28fa55b39" (UID: "318bddae-38ad-45d5-b5ee-08a28fa55b39"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.050479 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318bddae-38ad-45d5-b5ee-08a28fa55b39-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "318bddae-38ad-45d5-b5ee-08a28fa55b39" (UID: "318bddae-38ad-45d5-b5ee-08a28fa55b39"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.141063 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/318bddae-38ad-45d5-b5ee-08a28fa55b39-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.141110 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/318bddae-38ad-45d5-b5ee-08a28fa55b39-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.161544 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xjwwr" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.447871 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:46 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:46 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:46 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.448255 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.554961 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"318bddae-38ad-45d5-b5ee-08a28fa55b39","Type":"ContainerDied","Data":"2dbcb591f5fb0f7e216e84282489453ae0834aaa093fa1688ac720673d9fb76a"} Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.555010 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dbcb591f5fb0f7e216e84282489453ae0834aaa093fa1688ac720673d9fb76a" Jan 27 07:47:46 crc kubenswrapper[4799]: I0127 07:47:46.555072 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.197043 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:47:47 crc kubenswrapper[4799]: E0127 07:47:47.197253 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de07a2d4-e916-4c2d-bb3b-b8a268461a71" containerName="collect-profiles" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.197264 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="de07a2d4-e916-4c2d-bb3b-b8a268461a71" containerName="collect-profiles" Jan 27 07:47:47 crc kubenswrapper[4799]: E0127 07:47:47.197277 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318bddae-38ad-45d5-b5ee-08a28fa55b39" containerName="pruner" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.197285 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="318bddae-38ad-45d5-b5ee-08a28fa55b39" containerName="pruner" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.197422 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="de07a2d4-e916-4c2d-bb3b-b8a268461a71" containerName="collect-profiles" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 
07:47:47.197439 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="318bddae-38ad-45d5-b5ee-08a28fa55b39" containerName="pruner" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.197762 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.201592 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.201706 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.205619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.264155 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.264231 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.365676 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.365738 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.365866 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.417811 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.447214 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:47 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:47 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:47 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.447667 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:47 crc kubenswrapper[4799]: I0127 07:47:47.626949 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:48 crc kubenswrapper[4799]: I0127 07:47:48.447771 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:48 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:48 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:48 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:48 crc kubenswrapper[4799]: I0127 07:47:48.447830 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:48 crc kubenswrapper[4799]: I0127 07:47:48.855947 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 07:47:49 crc kubenswrapper[4799]: I0127 07:47:49.456691 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:49 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:49 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:49 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:49 crc kubenswrapper[4799]: I0127 07:47:49.457350 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" 
podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:49 crc kubenswrapper[4799]: I0127 07:47:49.612038 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"520007ee-d81e-4c47-9a7d-a5d50997c3b7","Type":"ContainerStarted","Data":"d78aa8b960cbc7f54307f627f8089d1e741a75a808a81ce752dde41598390a16"} Jan 27 07:47:50 crc kubenswrapper[4799]: I0127 07:47:50.446875 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:50 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:50 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:50 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:50 crc kubenswrapper[4799]: I0127 07:47:50.446962 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:50 crc kubenswrapper[4799]: I0127 07:47:50.621776 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"520007ee-d81e-4c47-9a7d-a5d50997c3b7","Type":"ContainerStarted","Data":"6a2262d08e2ebc6104ee1f6b8f60454e6be583e6d5feed84652b15dc888f3b9f"} Jan 27 07:47:51 crc kubenswrapper[4799]: I0127 07:47:51.446535 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:51 crc kubenswrapper[4799]: [-]has-synced failed: reason 
withheld Jan 27 07:47:51 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:51 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:51 crc kubenswrapper[4799]: I0127 07:47:51.447285 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:51 crc kubenswrapper[4799]: I0127 07:47:51.649438 4799 generic.go:334] "Generic (PLEG): container finished" podID="520007ee-d81e-4c47-9a7d-a5d50997c3b7" containerID="6a2262d08e2ebc6104ee1f6b8f60454e6be583e6d5feed84652b15dc888f3b9f" exitCode=0 Jan 27 07:47:51 crc kubenswrapper[4799]: I0127 07:47:51.649484 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"520007ee-d81e-4c47-9a7d-a5d50997c3b7","Type":"ContainerDied","Data":"6a2262d08e2ebc6104ee1f6b8f60454e6be583e6d5feed84652b15dc888f3b9f"} Jan 27 07:47:52 crc kubenswrapper[4799]: I0127 07:47:52.446507 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:52 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:52 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:52 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:52 crc kubenswrapper[4799]: I0127 07:47:52.446594 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.335670 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnr7q 
container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.336131 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tnr7q" podUID="a593dc31-38ff-4849-9ad0-cbaf0b6d1547" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.335688 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnr7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.336236 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tnr7q" podUID="a593dc31-38ff-4849-9ad0-cbaf0b6d1547" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.447235 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:53 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Jan 27 07:47:53 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:53 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.447337 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" 
podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.853036 4799 patch_prober.go:28] interesting pod/console-f9d7485db-bl4wn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 07:47:53 crc kubenswrapper[4799]: I0127 07:47:53.853100 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bl4wn" podUID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.247114 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fv5p6"] Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.247647 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" containerName="controller-manager" containerID="cri-o://02e70d1c7f5046d310a17df04e8a18b14cc9fa637ddc1a3ce1d6d89a178813c9" gracePeriod=30 Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.262665 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"] Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.263118 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" podUID="5e84a898-a553-48bd-afbb-5688db92ff4b" containerName="route-controller-manager" containerID="cri-o://a89c20ac09dff817fa88fbb0364d7473a99d7bbcf97a11e45e828693d21ef6c1" 
gracePeriod=30 Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.448191 4799 patch_prober.go:28] interesting pod/router-default-5444994796-l4462 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 07:47:54 crc kubenswrapper[4799]: [+]has-synced ok Jan 27 07:47:54 crc kubenswrapper[4799]: [+]process-running ok Jan 27 07:47:54 crc kubenswrapper[4799]: healthz check failed Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.448337 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l4462" podUID="9664c11c-1653-4690-9eb4-9c4918070a0d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.689512 4799 generic.go:334] "Generic (PLEG): container finished" podID="5e84a898-a553-48bd-afbb-5688db92ff4b" containerID="a89c20ac09dff817fa88fbb0364d7473a99d7bbcf97a11e45e828693d21ef6c1" exitCode=0 Jan 27 07:47:54 crc kubenswrapper[4799]: I0127 07:47:54.689553 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" event={"ID":"5e84a898-a553-48bd-afbb-5688db92ff4b","Type":"ContainerDied","Data":"a89c20ac09dff817fa88fbb0364d7473a99d7bbcf97a11e45e828693d21ef6c1"} Jan 27 07:47:55 crc kubenswrapper[4799]: I0127 07:47:55.451081 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:55 crc kubenswrapper[4799]: I0127 07:47:55.453604 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-l4462" Jan 27 07:47:55 crc kubenswrapper[4799]: I0127 07:47:55.699171 4799 generic.go:334] "Generic (PLEG): container finished" podID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" 
containerID="02e70d1c7f5046d310a17df04e8a18b14cc9fa637ddc1a3ce1d6d89a178813c9" exitCode=0 Jan 27 07:47:55 crc kubenswrapper[4799]: I0127 07:47:55.699237 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" event={"ID":"ca13973c-b3d8-47d7-bd6c-14ebc72bd907","Type":"ContainerDied","Data":"02e70d1c7f5046d310a17df04e8a18b14cc9fa637ddc1a3ce1d6d89a178813c9"} Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.128513 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.142122 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.300276 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e84a898-a553-48bd-afbb-5688db92ff4b-serving-cert\") pod \"5e84a898-a553-48bd-afbb-5688db92ff4b\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.300407 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kube-api-access\") pod \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\" (UID: \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.300440 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kubelet-dir\") pod \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\" (UID: \"520007ee-d81e-4c47-9a7d-a5d50997c3b7\") " Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.300487 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-config\") pod \"5e84a898-a553-48bd-afbb-5688db92ff4b\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.300517 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gprsr\" (UniqueName: \"kubernetes.io/projected/5e84a898-a553-48bd-afbb-5688db92ff4b-kube-api-access-gprsr\") pod \"5e84a898-a553-48bd-afbb-5688db92ff4b\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.300610 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-client-ca\") pod \"5e84a898-a553-48bd-afbb-5688db92ff4b\" (UID: \"5e84a898-a553-48bd-afbb-5688db92ff4b\") " Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.301804 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-config" (OuterVolumeSpecName: "config") pod "5e84a898-a553-48bd-afbb-5688db92ff4b" (UID: "5e84a898-a553-48bd-afbb-5688db92ff4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.301885 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "520007ee-d81e-4c47-9a7d-a5d50997c3b7" (UID: "520007ee-d81e-4c47-9a7d-a5d50997c3b7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.301890 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e84a898-a553-48bd-afbb-5688db92ff4b" (UID: "5e84a898-a553-48bd-afbb-5688db92ff4b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.305496 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "520007ee-d81e-4c47-9a7d-a5d50997c3b7" (UID: "520007ee-d81e-4c47-9a7d-a5d50997c3b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.308979 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e84a898-a553-48bd-afbb-5688db92ff4b-kube-api-access-gprsr" (OuterVolumeSpecName: "kube-api-access-gprsr") pod "5e84a898-a553-48bd-afbb-5688db92ff4b" (UID: "5e84a898-a553-48bd-afbb-5688db92ff4b"). InnerVolumeSpecName "kube-api-access-gprsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.309033 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e84a898-a553-48bd-afbb-5688db92ff4b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e84a898-a553-48bd-afbb-5688db92ff4b" (UID: "5e84a898-a553-48bd-afbb-5688db92ff4b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.402864 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e84a898-a553-48bd-afbb-5688db92ff4b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.403229 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.403243 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.403255 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/520007ee-d81e-4c47-9a7d-a5d50997c3b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.403268 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gprsr\" (UniqueName: \"kubernetes.io/projected/5e84a898-a553-48bd-afbb-5688db92ff4b-kube-api-access-gprsr\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.403279 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e84a898-a553-48bd-afbb-5688db92ff4b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.720085 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"520007ee-d81e-4c47-9a7d-a5d50997c3b7","Type":"ContainerDied","Data":"d78aa8b960cbc7f54307f627f8089d1e741a75a808a81ce752dde41598390a16"} Jan 27 07:47:59 crc kubenswrapper[4799]: 
I0127 07:47:59.720127 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d78aa8b960cbc7f54307f627f8089d1e741a75a808a81ce752dde41598390a16" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.720102 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.721565 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" event={"ID":"5e84a898-a553-48bd-afbb-5688db92ff4b","Type":"ContainerDied","Data":"e2b46e4dff22bf246cca5dacd1612f5af3f1620a202b4136c147080641bbe854"} Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.721635 4799 scope.go:117] "RemoveContainer" containerID="a89c20ac09dff817fa88fbb0364d7473a99d7bbcf97a11e45e828693d21ef6c1" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.721581 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f" Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.747447 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"] Jan 27 07:47:59 crc kubenswrapper[4799]: I0127 07:47:59.750437 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77f8f"] Jan 27 07:48:00 crc kubenswrapper[4799]: I0127 07:48:00.458364 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e84a898-a553-48bd-afbb-5688db92ff4b" path="/var/lib/kubelet/pods/5e84a898-a553-48bd-afbb-5688db92ff4b/volumes" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.229167 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.592647 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6"] Jan 27 07:48:01 crc kubenswrapper[4799]: E0127 07:48:01.595477 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e84a898-a553-48bd-afbb-5688db92ff4b" containerName="route-controller-manager" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.595501 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e84a898-a553-48bd-afbb-5688db92ff4b" containerName="route-controller-manager" Jan 27 07:48:01 crc kubenswrapper[4799]: E0127 07:48:01.595518 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520007ee-d81e-4c47-9a7d-a5d50997c3b7" containerName="pruner" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.595524 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="520007ee-d81e-4c47-9a7d-a5d50997c3b7" containerName="pruner" Jan 27 07:48:01 crc kubenswrapper[4799]: 
I0127 07:48:01.595826 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="520007ee-d81e-4c47-9a7d-a5d50997c3b7" containerName="pruner" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.595850 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e84a898-a553-48bd-afbb-5688db92ff4b" containerName="route-controller-manager" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.596507 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.598807 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.598998 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.599062 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.600619 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.601968 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.611865 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.616969 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6"] Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.740134 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4zk9\" (UniqueName: \"kubernetes.io/projected/60808c76-07ea-4482-877a-c1ab1eff8ef0-kube-api-access-d4zk9\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.740187 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-config\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.740457 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-client-ca\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.740668 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60808c76-07ea-4482-877a-c1ab1eff8ef0-serving-cert\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.842336 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-client-ca\") pod 
\"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.842445 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60808c76-07ea-4482-877a-c1ab1eff8ef0-serving-cert\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.843339 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-client-ca\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.845070 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4zk9\" (UniqueName: \"kubernetes.io/projected/60808c76-07ea-4482-877a-c1ab1eff8ef0-kube-api-access-d4zk9\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.845106 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-config\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.846575 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-config\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.851403 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60808c76-07ea-4482-877a-c1ab1eff8ef0-serving-cert\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.860030 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4zk9\" (UniqueName: \"kubernetes.io/projected/60808c76-07ea-4482-877a-c1ab1eff8ef0-kube-api-access-d4zk9\") pod \"route-controller-manager-75ff8d4784-hh2n6\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:01 crc kubenswrapper[4799]: I0127 07:48:01.920992 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.249992 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.250111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.250214 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.250247 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.251877 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.251929 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.252030 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.262119 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.262131 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.263570 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.274968 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.275525 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.276226 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.372552 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 07:48:02 crc kubenswrapper[4799]: I0127 07:48:02.569830 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 07:48:03 crc kubenswrapper[4799]: I0127 07:48:03.348829 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tnr7q" Jan 27 07:48:03 crc kubenswrapper[4799]: I0127 07:48:03.855438 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:48:03 crc kubenswrapper[4799]: I0127 07:48:03.859952 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 07:48:04 crc kubenswrapper[4799]: I0127 07:48:04.892342 4799 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fv5p6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:48:04 crc kubenswrapper[4799]: I0127 
07:48:04.892418 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:48:14 crc kubenswrapper[4799]: I0127 07:48:14.092051 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n2c6j" Jan 27 07:48:14 crc kubenswrapper[4799]: I0127 07:48:14.316051 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6"] Jan 27 07:48:14 crc kubenswrapper[4799]: I0127 07:48:14.892200 4799 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fv5p6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 07:48:14 crc kubenswrapper[4799]: I0127 07:48:14.892263 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.452954 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.481610 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69f74844c8-r8kkz"] Jan 27 07:48:15 crc kubenswrapper[4799]: E0127 07:48:15.482655 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" containerName="controller-manager" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.482685 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" containerName="controller-manager" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.482848 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" containerName="controller-manager" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.483278 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.497051 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f74844c8-r8kkz"] Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.539909 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-config\") pod \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.539977 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-client-ca\") pod \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540044 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2tct\" (UniqueName: \"kubernetes.io/projected/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-kube-api-access-c2tct\") pod \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540106 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-serving-cert\") pod \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\" (UID: \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540164 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-proxy-ca-bundles\") pod \"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\" (UID: 
\"ca13973c-b3d8-47d7-bd6c-14ebc72bd907\") " Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540344 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-config\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540433 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zqx\" (UniqueName: \"kubernetes.io/projected/c849a377-ab04-4f2a-b659-4e96dfa619cb-kube-api-access-c7zqx\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540468 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-proxy-ca-bundles\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540486 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-client-ca\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.540525 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c849a377-ab04-4f2a-b659-4e96dfa619cb-serving-cert\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.541480 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ca13973c-b3d8-47d7-bd6c-14ebc72bd907" (UID: "ca13973c-b3d8-47d7-bd6c-14ebc72bd907"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.542542 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca13973c-b3d8-47d7-bd6c-14ebc72bd907" (UID: "ca13973c-b3d8-47d7-bd6c-14ebc72bd907"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.542696 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-config" (OuterVolumeSpecName: "config") pod "ca13973c-b3d8-47d7-bd6c-14ebc72bd907" (UID: "ca13973c-b3d8-47d7-bd6c-14ebc72bd907"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.548351 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-kube-api-access-c2tct" (OuterVolumeSpecName: "kube-api-access-c2tct") pod "ca13973c-b3d8-47d7-bd6c-14ebc72bd907" (UID: "ca13973c-b3d8-47d7-bd6c-14ebc72bd907"). InnerVolumeSpecName "kube-api-access-c2tct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.548879 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca13973c-b3d8-47d7-bd6c-14ebc72bd907" (UID: "ca13973c-b3d8-47d7-bd6c-14ebc72bd907"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.642168 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7zqx\" (UniqueName: \"kubernetes.io/projected/c849a377-ab04-4f2a-b659-4e96dfa619cb-kube-api-access-c7zqx\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.642940 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-proxy-ca-bundles\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643021 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-client-ca\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643127 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c849a377-ab04-4f2a-b659-4e96dfa619cb-serving-cert\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643256 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-config\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643404 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643477 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643542 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643610 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.643667 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2tct\" (UniqueName: \"kubernetes.io/projected/ca13973c-b3d8-47d7-bd6c-14ebc72bd907-kube-api-access-c2tct\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:15 crc 
kubenswrapper[4799]: I0127 07:48:15.645177 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-proxy-ca-bundles\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.645650 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-client-ca\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.648747 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c849a377-ab04-4f2a-b659-4e96dfa619cb-serving-cert\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.675016 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7zqx\" (UniqueName: \"kubernetes.io/projected/c849a377-ab04-4f2a-b659-4e96dfa619cb-kube-api-access-c7zqx\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.675663 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-config\") pod \"controller-manager-69f74844c8-r8kkz\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") " 
pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.797147 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.808704 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" event={"ID":"ca13973c-b3d8-47d7-bd6c-14ebc72bd907","Type":"ContainerDied","Data":"04722a010d0de8ff1b42723461b5ddf4072c880a4865649df10d19b71b5a6dd9"} Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.808804 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fv5p6" Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.839818 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fv5p6"] Jan 27 07:48:15 crc kubenswrapper[4799]: I0127 07:48:15.839882 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fv5p6"] Jan 27 07:48:16 crc kubenswrapper[4799]: I0127 07:48:16.460760 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca13973c-b3d8-47d7-bd6c-14ebc72bd907" path="/var/lib/kubelet/pods/ca13973c-b3d8-47d7-bd6c-14ebc72bd907/volumes" Jan 27 07:48:18 crc kubenswrapper[4799]: I0127 07:48:18.191993 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:48:18 crc kubenswrapper[4799]: I0127 07:48:18.197086 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Jan 27 07:48:18 crc kubenswrapper[4799]: I0127 07:48:18.218563 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0af5040b-0391-423c-b87d-90df4965f58f-metrics-certs\") pod \"network-metrics-daemon-qq7cx\" (UID: \"0af5040b-0391-423c-b87d-90df4965f58f\") " pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:48:18 crc kubenswrapper[4799]: I0127 07:48:18.269146 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 07:48:18 crc kubenswrapper[4799]: I0127 07:48:18.276807 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qq7cx" Jan 27 07:48:19 crc kubenswrapper[4799]: E0127 07:48:19.874000 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 07:48:19 crc kubenswrapper[4799]: E0127 07:48:19.874704 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfd8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-g6ktz_openshift-marketplace(c808aeb6-0065-4efc-9d98-9ee6c97e3250): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:48:19 crc kubenswrapper[4799]: E0127 07:48:19.876240 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-g6ktz" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" Jan 27 07:48:19 crc 
kubenswrapper[4799]: E0127 07:48:19.948634 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 07:48:19 crc kubenswrapper[4799]: E0127 07:48:19.948964 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpq8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-g7thd_openshift-marketplace(6be235d3-0500-4c98-abf6-a8709c12e8a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:48:19 crc kubenswrapper[4799]: E0127 07:48:19.950239 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-g7thd" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" Jan 27 07:48:21 crc kubenswrapper[4799]: E0127 07:48:21.354888 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g6ktz" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" Jan 27 07:48:21 crc kubenswrapper[4799]: E0127 07:48:21.354906 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-g7thd" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" Jan 27 07:48:21 crc kubenswrapper[4799]: E0127 07:48:21.455388 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 07:48:21 crc kubenswrapper[4799]: E0127 07:48:21.455595 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnbpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x2wfc_openshift-marketplace(d96b8a37-8325-4d8c-b8ce-94f40dd0a21a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:48:21 crc kubenswrapper[4799]: E0127 07:48:21.457665 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x2wfc" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.806960 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x2wfc" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" Jan 27 07:48:22 crc kubenswrapper[4799]: I0127 07:48:22.817314 4799 scope.go:117] "RemoveContainer" containerID="02e70d1c7f5046d310a17df04e8a18b14cc9fa637ddc1a3ce1d6d89a178813c9" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.878247 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.878440 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g62kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tm4nj_openshift-marketplace(0f28fa44-7662-40a4-a2c2-81bb5a9c4ace): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.879679 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tm4nj" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" Jan 27 07:48:22 crc 
kubenswrapper[4799]: E0127 07:48:22.951894 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.952363 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plhtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-kr6pr_openshift-marketplace(1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.956254 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kr6pr" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.962329 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.962444 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4fmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-76z59_openshift-marketplace(5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:48:22 crc kubenswrapper[4799]: E0127 07:48:22.964933 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-76z59" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" Jan 27 07:48:23 crc 
kubenswrapper[4799]: E0127 07:48:23.079036 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 07:48:23 crc kubenswrapper[4799]: E0127 07:48:23.079434 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sjcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-j7dw8_openshift-marketplace(ef0e0c84-7483-438a-8ad1-b105cd4e2cc7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 07:48:23 crc kubenswrapper[4799]: E0127 07:48:23.082903 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j7dw8" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.350694 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6"] Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.433495 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qq7cx"] Jan 27 07:48:23 crc kubenswrapper[4799]: W0127 07:48:23.442498 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc849a377_ab04_4f2a_b659_4e96dfa619cb.slice/crio-65705656a0b14348df6cc08ef94309affff0153e392edd1f7b2ce71b0f56b58d WatchSource:0}: Error finding container 65705656a0b14348df6cc08ef94309affff0153e392edd1f7b2ce71b0f56b58d: Status 404 returned error can't find the container with id 65705656a0b14348df6cc08ef94309affff0153e392edd1f7b2ce71b0f56b58d Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.445332 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f74844c8-r8kkz"] Jan 27 07:48:23 crc kubenswrapper[4799]: W0127 07:48:23.451155 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0af5040b_0391_423c_b87d_90df4965f58f.slice/crio-cc07b2fe912b18f256feaca76fd58360a6bf52c9e9ec4a9851e702a5fc78664d WatchSource:0}: Error finding container cc07b2fe912b18f256feaca76fd58360a6bf52c9e9ec4a9851e702a5fc78664d: Status 404 returned error can't find the container with id cc07b2fe912b18f256feaca76fd58360a6bf52c9e9ec4a9851e702a5fc78664d Jan 27 07:48:23 crc kubenswrapper[4799]: W0127 07:48:23.461369 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e8359efb7bffe7f0c72e9eabbed5d3b24a3d637247e0c73827815b58357cba3a WatchSource:0}: Error finding container e8359efb7bffe7f0c72e9eabbed5d3b24a3d637247e0c73827815b58357cba3a: Status 404 returned error can't find the container with id e8359efb7bffe7f0c72e9eabbed5d3b24a3d637247e0c73827815b58357cba3a Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.731740 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.731810 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.801937 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.803905 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.807486 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.808138 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.815158 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.884172 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0b41986f1e359ec10f3b968a82ba94f4c8f28fcc35647b02abae47551501a951"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.884237 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"79dd6055e053aed64cac7fa8aa791de3061f911c87efebac9dff1a0a729ca8ec"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.885133 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.885722 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31defd1-26ef-4ee4-9dde-a23184013013-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.885756 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b31defd1-26ef-4ee4-9dde-a23184013013-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.916554 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" event={"ID":"0af5040b-0391-423c-b87d-90df4965f58f","Type":"ContainerStarted","Data":"107141d3d2974ceedfe205ae807727e3cf42c197f42fe8edfeaf3db3624729ba"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.916614 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" event={"ID":"0af5040b-0391-423c-b87d-90df4965f58f","Type":"ContainerStarted","Data":"cc07b2fe912b18f256feaca76fd58360a6bf52c9e9ec4a9851e702a5fc78664d"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.920385 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9eca2cc08d307983e8c52cba578ce3d292ff0f63a2eba2fc8a78acd443586704"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.920412 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b23782d827966ed7f52efb6dc21cf27eb5b92a8e6160cfe958527ed38e554d7e"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.944216 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5860dbda920c0f409b87b8946966380cbaf502090d74f6c6c64112de4dcc3208"} Jan 27 
07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.944282 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e8359efb7bffe7f0c72e9eabbed5d3b24a3d637247e0c73827815b58357cba3a"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.948422 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" event={"ID":"c849a377-ab04-4f2a-b659-4e96dfa619cb","Type":"ContainerStarted","Data":"16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.948482 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" event={"ID":"c849a377-ab04-4f2a-b659-4e96dfa619cb","Type":"ContainerStarted","Data":"65705656a0b14348df6cc08ef94309affff0153e392edd1f7b2ce71b0f56b58d"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.948966 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.950567 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" event={"ID":"60808c76-07ea-4482-877a-c1ab1eff8ef0","Type":"ContainerStarted","Data":"b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.950640 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" event={"ID":"60808c76-07ea-4482-877a-c1ab1eff8ef0","Type":"ContainerStarted","Data":"e6178d35a239260f1102463f4d6c5be7a08cd51657312b49f8785efd87745d0b"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.950824 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" podUID="60808c76-07ea-4482-877a-c1ab1eff8ef0" containerName="route-controller-manager" containerID="cri-o://b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8" gracePeriod=30 Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.951505 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.966207 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtw5" event={"ID":"472d8035-24d2-4d6c-bb9d-4f932d4be020","Type":"ContainerStarted","Data":"8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0"} Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.968732 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" Jan 27 07:48:23 crc kubenswrapper[4799]: E0127 07:48:23.969130 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tm4nj" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" Jan 27 07:48:23 crc kubenswrapper[4799]: E0127 07:48:23.969215 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-76z59" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" Jan 27 07:48:23 crc kubenswrapper[4799]: E0127 07:48:23.969262 4799 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kr6pr" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" Jan 27 07:48:23 crc kubenswrapper[4799]: E0127 07:48:23.980526 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j7dw8" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.987454 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b31defd1-26ef-4ee4-9dde-a23184013013-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.987644 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31defd1-26ef-4ee4-9dde-a23184013013-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.988133 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b31defd1-26ef-4ee4-9dde-a23184013013-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:23 crc kubenswrapper[4799]: I0127 07:48:23.996845 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" podStartSLOduration=9.996820258 podStartE2EDuration="9.996820258s" podCreationTimestamp="2026-01-27 07:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:48:23.996026606 +0000 UTC m=+170.307130671" watchObservedRunningTime="2026-01-27 07:48:23.996820258 +0000 UTC m=+170.307924323" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.020123 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31defd1-26ef-4ee4-9dde-a23184013013-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.028053 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" podStartSLOduration=30.028030714 podStartE2EDuration="30.028030714s" podCreationTimestamp="2026-01-27 07:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:48:24.025855693 +0000 UTC m=+170.336959758" watchObservedRunningTime="2026-01-27 07:48:24.028030714 +0000 UTC m=+170.339134779" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.036225 4799 patch_prober.go:28] interesting pod/route-controller-manager-75ff8d4784-hh2n6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:57120->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.036322 4799 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" podUID="60808c76-07ea-4482-877a-c1ab1eff8ef0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:57120->10.217.0.54:8443: read: connection reset by peer" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.122018 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.395105 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-75ff8d4784-hh2n6_60808c76-07ea-4482-877a-c1ab1eff8ef0/route-controller-manager/0.log" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.395691 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.431872 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"] Jan 27 07:48:24 crc kubenswrapper[4799]: E0127 07:48:24.432105 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60808c76-07ea-4482-877a-c1ab1eff8ef0" containerName="route-controller-manager" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.432122 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="60808c76-07ea-4482-877a-c1ab1eff8ef0" containerName="route-controller-manager" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.434664 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="60808c76-07ea-4482-877a-c1ab1eff8ef0" containerName="route-controller-manager" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.435081 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.439567 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"] Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.497829 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60808c76-07ea-4482-877a-c1ab1eff8ef0-serving-cert\") pod \"60808c76-07ea-4482-877a-c1ab1eff8ef0\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.497888 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4zk9\" (UniqueName: \"kubernetes.io/projected/60808c76-07ea-4482-877a-c1ab1eff8ef0-kube-api-access-d4zk9\") pod \"60808c76-07ea-4482-877a-c1ab1eff8ef0\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.497995 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-client-ca\") pod \"60808c76-07ea-4482-877a-c1ab1eff8ef0\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.498046 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-config\") pod \"60808c76-07ea-4482-877a-c1ab1eff8ef0\" (UID: \"60808c76-07ea-4482-877a-c1ab1eff8ef0\") " Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.498247 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24l7\" (UniqueName: \"kubernetes.io/projected/536cb82e-df7d-4758-9c45-f00d28dfc3fd-kube-api-access-t24l7\") 
pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.498278 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-config\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.498313 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-client-ca\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.498343 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536cb82e-df7d-4758-9c45-f00d28dfc3fd-serving-cert\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.500042 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-client-ca" (OuterVolumeSpecName: "client-ca") pod "60808c76-07ea-4482-877a-c1ab1eff8ef0" (UID: "60808c76-07ea-4482-877a-c1ab1eff8ef0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.500609 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-config" (OuterVolumeSpecName: "config") pod "60808c76-07ea-4482-877a-c1ab1eff8ef0" (UID: "60808c76-07ea-4482-877a-c1ab1eff8ef0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.506398 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60808c76-07ea-4482-877a-c1ab1eff8ef0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "60808c76-07ea-4482-877a-c1ab1eff8ef0" (UID: "60808c76-07ea-4482-877a-c1ab1eff8ef0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.507608 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60808c76-07ea-4482-877a-c1ab1eff8ef0-kube-api-access-d4zk9" (OuterVolumeSpecName: "kube-api-access-d4zk9") pod "60808c76-07ea-4482-877a-c1ab1eff8ef0" (UID: "60808c76-07ea-4482-877a-c1ab1eff8ef0"). InnerVolumeSpecName "kube-api-access-d4zk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.599361 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24l7\" (UniqueName: \"kubernetes.io/projected/536cb82e-df7d-4758-9c45-f00d28dfc3fd-kube-api-access-t24l7\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.600188 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-config\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.600360 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-client-ca\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.600490 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536cb82e-df7d-4758-9c45-f00d28dfc3fd-serving-cert\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.600664 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4zk9\" (UniqueName: 
\"kubernetes.io/projected/60808c76-07ea-4482-877a-c1ab1eff8ef0-kube-api-access-d4zk9\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.600778 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.600863 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60808c76-07ea-4482-877a-c1ab1eff8ef0-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.600953 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60808c76-07ea-4482-877a-c1ab1eff8ef0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.602334 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-config\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.602848 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-client-ca\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.608786 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536cb82e-df7d-4758-9c45-f00d28dfc3fd-serving-cert\") pod 
\"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.623910 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24l7\" (UniqueName: \"kubernetes.io/projected/536cb82e-df7d-4758-9c45-f00d28dfc3fd-kube-api-access-t24l7\") pod \"route-controller-manager-6bc97b9f78-kmzzr\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") " pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.658874 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.764891 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.974459 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b31defd1-26ef-4ee4-9dde-a23184013013","Type":"ContainerStarted","Data":"73d54cb8dcc6251217cb74678c9a5361c7f70898790964e6ec85a31748b016c1"} Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.977059 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qq7cx" event={"ID":"0af5040b-0391-423c-b87d-90df4965f58f","Type":"ContainerStarted","Data":"c3ade667b6606ce96c9c657d5b7d60347d8fceeb724287ba910f088ce3ad5e34"} Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.983158 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-75ff8d4784-hh2n6_60808c76-07ea-4482-877a-c1ab1eff8ef0/route-controller-manager/0.log" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.983238 
4799 generic.go:334] "Generic (PLEG): container finished" podID="60808c76-07ea-4482-877a-c1ab1eff8ef0" containerID="b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8" exitCode=255 Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.983450 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.984409 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" event={"ID":"60808c76-07ea-4482-877a-c1ab1eff8ef0","Type":"ContainerDied","Data":"b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8"} Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.984447 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6" event={"ID":"60808c76-07ea-4482-877a-c1ab1eff8ef0","Type":"ContainerDied","Data":"e6178d35a239260f1102463f4d6c5be7a08cd51657312b49f8785efd87745d0b"} Jan 27 07:48:24 crc kubenswrapper[4799]: I0127 07:48:24.984467 4799 scope.go:117] "RemoveContainer" containerID="b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8" Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.000346 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qq7cx" podStartSLOduration=151.000319445 podStartE2EDuration="2m31.000319445s" podCreationTimestamp="2026-01-27 07:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:48:24.995249452 +0000 UTC m=+171.306353517" watchObservedRunningTime="2026-01-27 07:48:25.000319445 +0000 UTC m=+171.311423510" Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.008573 4799 generic.go:334] "Generic (PLEG): container 
finished" podID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerID="8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0" exitCode=0 Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.009655 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtw5" event={"ID":"472d8035-24d2-4d6c-bb9d-4f932d4be020","Type":"ContainerDied","Data":"8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0"} Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.009848 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtw5" event={"ID":"472d8035-24d2-4d6c-bb9d-4f932d4be020","Type":"ContainerStarted","Data":"f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4"} Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.018825 4799 scope.go:117] "RemoveContainer" containerID="b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8" Jan 27 07:48:25 crc kubenswrapper[4799]: E0127 07:48:25.019555 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8\": container with ID starting with b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8 not found: ID does not exist" containerID="b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8" Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.019618 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8"} err="failed to get container status \"b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8\": rpc error: code = NotFound desc = could not find container \"b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8\": container with ID starting with b3fcbf7945efb00f91b6643e693df77b9a4ca8e7acd01af511eab6cda5d94da8 not 
found: ID does not exist" Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.049925 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zxtw5" podStartSLOduration=2.0346646919999998 podStartE2EDuration="43.049889865s" podCreationTimestamp="2026-01-27 07:47:42 +0000 UTC" firstStartedPulling="2026-01-27 07:47:43.461526961 +0000 UTC m=+129.772631026" lastFinishedPulling="2026-01-27 07:48:24.476752114 +0000 UTC m=+170.787856199" observedRunningTime="2026-01-27 07:48:25.031385636 +0000 UTC m=+171.342489711" watchObservedRunningTime="2026-01-27 07:48:25.049889865 +0000 UTC m=+171.360993960" Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.056849 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6"] Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.063249 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75ff8d4784-hh2n6"] Jan 27 07:48:25 crc kubenswrapper[4799]: I0127 07:48:25.200033 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"] Jan 27 07:48:26 crc kubenswrapper[4799]: I0127 07:48:26.019055 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" event={"ID":"536cb82e-df7d-4758-9c45-f00d28dfc3fd","Type":"ContainerStarted","Data":"7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be"} Jan 27 07:48:26 crc kubenswrapper[4799]: I0127 07:48:26.019710 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" event={"ID":"536cb82e-df7d-4758-9c45-f00d28dfc3fd","Type":"ContainerStarted","Data":"6e838c1f9382ae9de821db794d590b4f1ad1302b114e49167181f2e4683343d4"} Jan 27 07:48:26 crc 
kubenswrapper[4799]: I0127 07:48:26.019731 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:26 crc kubenswrapper[4799]: I0127 07:48:26.025190 4799 generic.go:334] "Generic (PLEG): container finished" podID="b31defd1-26ef-4ee4-9dde-a23184013013" containerID="31f310c1d4175b66879d96072a1b88192f66c858b447ad9b76d09aeef8364677" exitCode=0 Jan 27 07:48:26 crc kubenswrapper[4799]: I0127 07:48:26.025283 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b31defd1-26ef-4ee4-9dde-a23184013013","Type":"ContainerDied","Data":"31f310c1d4175b66879d96072a1b88192f66c858b447ad9b76d09aeef8364677"} Jan 27 07:48:26 crc kubenswrapper[4799]: I0127 07:48:26.026083 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" Jan 27 07:48:26 crc kubenswrapper[4799]: I0127 07:48:26.046641 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" podStartSLOduration=12.04658953 podStartE2EDuration="12.04658953s" podCreationTimestamp="2026-01-27 07:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:48:26.039727898 +0000 UTC m=+172.350831973" watchObservedRunningTime="2026-01-27 07:48:26.04658953 +0000 UTC m=+172.357693605" Jan 27 07:48:26 crc kubenswrapper[4799]: I0127 07:48:26.464735 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60808c76-07ea-4482-877a-c1ab1eff8ef0" path="/var/lib/kubelet/pods/60808c76-07ea-4482-877a-c1ab1eff8ef0/volumes" Jan 27 07:48:27 crc kubenswrapper[4799]: I0127 07:48:27.494138 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:27 crc kubenswrapper[4799]: I0127 07:48:27.555543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31defd1-26ef-4ee4-9dde-a23184013013-kube-api-access\") pod \"b31defd1-26ef-4ee4-9dde-a23184013013\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " Jan 27 07:48:27 crc kubenswrapper[4799]: I0127 07:48:27.555683 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b31defd1-26ef-4ee4-9dde-a23184013013-kubelet-dir\") pod \"b31defd1-26ef-4ee4-9dde-a23184013013\" (UID: \"b31defd1-26ef-4ee4-9dde-a23184013013\") " Jan 27 07:48:27 crc kubenswrapper[4799]: I0127 07:48:27.555771 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b31defd1-26ef-4ee4-9dde-a23184013013-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b31defd1-26ef-4ee4-9dde-a23184013013" (UID: "b31defd1-26ef-4ee4-9dde-a23184013013"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:48:27 crc kubenswrapper[4799]: I0127 07:48:27.556019 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b31defd1-26ef-4ee4-9dde-a23184013013-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:27 crc kubenswrapper[4799]: I0127 07:48:27.567565 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31defd1-26ef-4ee4-9dde-a23184013013-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b31defd1-26ef-4ee4-9dde-a23184013013" (UID: "b31defd1-26ef-4ee4-9dde-a23184013013"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:27 crc kubenswrapper[4799]: I0127 07:48:27.657097 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31defd1-26ef-4ee4-9dde-a23184013013-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:28 crc kubenswrapper[4799]: I0127 07:48:28.044432 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b31defd1-26ef-4ee4-9dde-a23184013013","Type":"ContainerDied","Data":"73d54cb8dcc6251217cb74678c9a5361c7f70898790964e6ec85a31748b016c1"} Jan 27 07:48:28 crc kubenswrapper[4799]: I0127 07:48:28.044471 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 07:48:28 crc kubenswrapper[4799]: I0127 07:48:28.044488 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73d54cb8dcc6251217cb74678c9a5361c7f70898790964e6ec85a31748b016c1" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.016089 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:48:30 crc kubenswrapper[4799]: E0127 07:48:30.017699 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31defd1-26ef-4ee4-9dde-a23184013013" containerName="pruner" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.017748 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31defd1-26ef-4ee4-9dde-a23184013013" containerName="pruner" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.018042 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b31defd1-26ef-4ee4-9dde-a23184013013" containerName="pruner" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.018848 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.024765 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.025345 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.028988 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.100269 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f10e10b-7684-4395-b1f1-051344597338-kube-api-access\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.100898 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.100937 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-var-lock\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.201934 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7f10e10b-7684-4395-b1f1-051344597338-kube-api-access\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.202470 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.202600 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-var-lock\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.202828 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-var-lock\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.203448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.229513 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f10e10b-7684-4395-b1f1-051344597338-kube-api-access\") pod \"installer-9-crc\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.358900 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:48:30 crc kubenswrapper[4799]: I0127 07:48:30.866405 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 07:48:31 crc kubenswrapper[4799]: I0127 07:48:31.063043 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f10e10b-7684-4395-b1f1-051344597338","Type":"ContainerStarted","Data":"bc6ec405638ea2ae08b39c47c34dd008e54a92450ead2d443b3b52388403cd56"} Jan 27 07:48:32 crc kubenswrapper[4799]: I0127 07:48:32.072452 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f10e10b-7684-4395-b1f1-051344597338","Type":"ContainerStarted","Data":"9a8041ccb5d3f6c95794158278dadd40862066abcba280aac420da88bc6f3646"} Jan 27 07:48:32 crc kubenswrapper[4799]: I0127 07:48:32.100841 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.100819689 podStartE2EDuration="3.100819689s" podCreationTimestamp="2026-01-27 07:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:48:32.100261414 +0000 UTC m=+178.411365479" watchObservedRunningTime="2026-01-27 07:48:32.100819689 +0000 UTC m=+178.411923754" Jan 27 07:48:32 crc kubenswrapper[4799]: I0127 07:48:32.488281 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:48:32 crc kubenswrapper[4799]: I0127 07:48:32.488392 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:48:32 crc 
kubenswrapper[4799]: I0127 07:48:32.712191 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:48:33 crc kubenswrapper[4799]: I0127 07:48:33.127070 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:48:34 crc kubenswrapper[4799]: I0127 07:48:34.983961 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n67f6"] Jan 27 07:48:38 crc kubenswrapper[4799]: I0127 07:48:38.141464 4799 generic.go:334] "Generic (PLEG): container finished" podID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerID="cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e" exitCode=0 Jan 27 07:48:38 crc kubenswrapper[4799]: I0127 07:48:38.141660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7thd" event={"ID":"6be235d3-0500-4c98-abf6-a8709c12e8a7","Type":"ContainerDied","Data":"cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e"} Jan 27 07:48:38 crc kubenswrapper[4799]: I0127 07:48:38.149662 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr6pr" event={"ID":"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd","Type":"ContainerStarted","Data":"c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2"} Jan 27 07:48:38 crc kubenswrapper[4799]: I0127 07:48:38.165187 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6ktz" event={"ID":"c808aeb6-0065-4efc-9d98-9ee6c97e3250","Type":"ContainerStarted","Data":"99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39"} Jan 27 07:48:38 crc kubenswrapper[4799]: I0127 07:48:38.179894 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2wfc" 
event={"ID":"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a","Type":"ContainerStarted","Data":"2eedf418ef5cd514829096e50686b9a5f69934d4ae75f02c4cd8766021a990cd"} Jan 27 07:48:38 crc kubenswrapper[4799]: I0127 07:48:38.187184 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7dw8" event={"ID":"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7","Type":"ContainerStarted","Data":"71113dfbb7b90c0cda0aa98c883a9983309907b4f1c99fe5f4188e41aa308961"} Jan 27 07:48:38 crc kubenswrapper[4799]: I0127 07:48:38.190511 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tm4nj" event={"ID":"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace","Type":"ContainerStarted","Data":"c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.217679 4799 generic.go:334] "Generic (PLEG): container finished" podID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerID="71113dfbb7b90c0cda0aa98c883a9983309907b4f1c99fe5f4188e41aa308961" exitCode=0 Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.218307 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7dw8" event={"ID":"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7","Type":"ContainerDied","Data":"71113dfbb7b90c0cda0aa98c883a9983309907b4f1c99fe5f4188e41aa308961"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.223743 4799 generic.go:334] "Generic (PLEG): container finished" podID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerID="c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5" exitCode=0 Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.223843 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tm4nj" event={"ID":"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace","Type":"ContainerDied","Data":"c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 
07:48:39.233998 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7thd" event={"ID":"6be235d3-0500-4c98-abf6-a8709c12e8a7","Type":"ContainerStarted","Data":"3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.236623 4799 generic.go:334] "Generic (PLEG): container finished" podID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerID="c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2" exitCode=0 Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.236688 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr6pr" event={"ID":"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd","Type":"ContainerDied","Data":"c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.244269 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76z59" event={"ID":"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5","Type":"ContainerStarted","Data":"9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.247663 4799 generic.go:334] "Generic (PLEG): container finished" podID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerID="99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39" exitCode=0 Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.247734 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6ktz" event={"ID":"c808aeb6-0065-4efc-9d98-9ee6c97e3250","Type":"ContainerDied","Data":"99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.251007 4799 generic.go:334] "Generic (PLEG): container finished" podID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerID="2eedf418ef5cd514829096e50686b9a5f69934d4ae75f02c4cd8766021a990cd" 
exitCode=0 Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.251041 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2wfc" event={"ID":"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a","Type":"ContainerDied","Data":"2eedf418ef5cd514829096e50686b9a5f69934d4ae75f02c4cd8766021a990cd"} Jan 27 07:48:39 crc kubenswrapper[4799]: I0127 07:48:39.328182 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g7thd" podStartSLOduration=3.271026323 podStartE2EDuration="57.328147514s" podCreationTimestamp="2026-01-27 07:47:42 +0000 UTC" firstStartedPulling="2026-01-27 07:47:44.493202808 +0000 UTC m=+130.804306873" lastFinishedPulling="2026-01-27 07:48:38.550323999 +0000 UTC m=+184.861428064" observedRunningTime="2026-01-27 07:48:39.307988828 +0000 UTC m=+185.619092903" watchObservedRunningTime="2026-01-27 07:48:39.328147514 +0000 UTC m=+185.639251579" Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.260227 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerID="9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d" exitCode=0 Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.260343 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76z59" event={"ID":"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5","Type":"ContainerDied","Data":"9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d"} Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.263427 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6ktz" event={"ID":"c808aeb6-0065-4efc-9d98-9ee6c97e3250","Type":"ContainerStarted","Data":"942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb"} Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.268114 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x2wfc" event={"ID":"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a","Type":"ContainerStarted","Data":"8ece0d3b4bda0ee239a92f7ed9f518ba0e6c82043ee9f737bd8b27255d7ea35e"} Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.270561 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7dw8" event={"ID":"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7","Type":"ContainerStarted","Data":"2ea78423f012ffc0a0d19c944faa8e8dbe56f41376552f8ca625ea56e9cc6b1a"} Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.274129 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tm4nj" event={"ID":"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace","Type":"ContainerStarted","Data":"4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96"} Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.277900 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr6pr" event={"ID":"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd","Type":"ContainerStarted","Data":"74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882"} Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.319640 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kr6pr" podStartSLOduration=3.994771985 podStartE2EDuration="1m2.319616422s" podCreationTimestamp="2026-01-27 07:47:38 +0000 UTC" firstStartedPulling="2026-01-27 07:47:41.394863283 +0000 UTC m=+127.705967348" lastFinishedPulling="2026-01-27 07:48:39.71970772 +0000 UTC m=+186.030811785" observedRunningTime="2026-01-27 07:48:40.317211945 +0000 UTC m=+186.628316020" watchObservedRunningTime="2026-01-27 07:48:40.319616422 +0000 UTC m=+186.630720487" Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.341996 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j7dw8" 
podStartSLOduration=2.8792413249999997 podStartE2EDuration="1m1.34197622s" podCreationTimestamp="2026-01-27 07:47:39 +0000 UTC" firstStartedPulling="2026-01-27 07:47:41.381359805 +0000 UTC m=+127.692463870" lastFinishedPulling="2026-01-27 07:48:39.8440947 +0000 UTC m=+186.155198765" observedRunningTime="2026-01-27 07:48:40.338244745 +0000 UTC m=+186.649348810" watchObservedRunningTime="2026-01-27 07:48:40.34197622 +0000 UTC m=+186.653080285" Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.384882 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x2wfc" podStartSLOduration=2.893349499 podStartE2EDuration="1m1.384856933s" podCreationTimestamp="2026-01-27 07:47:39 +0000 UTC" firstStartedPulling="2026-01-27 07:47:41.378432172 +0000 UTC m=+127.689536237" lastFinishedPulling="2026-01-27 07:48:39.869939606 +0000 UTC m=+186.181043671" observedRunningTime="2026-01-27 07:48:40.383031311 +0000 UTC m=+186.694135386" watchObservedRunningTime="2026-01-27 07:48:40.384856933 +0000 UTC m=+186.695960988" Jan 27 07:48:40 crc kubenswrapper[4799]: I0127 07:48:40.412625 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g6ktz" podStartSLOduration=3.811416212 podStartE2EDuration="1m2.412596391s" podCreationTimestamp="2026-01-27 07:47:38 +0000 UTC" firstStartedPulling="2026-01-27 07:47:41.363209246 +0000 UTC m=+127.674313311" lastFinishedPulling="2026-01-27 07:48:39.964389435 +0000 UTC m=+186.275493490" observedRunningTime="2026-01-27 07:48:40.411283104 +0000 UTC m=+186.722387169" watchObservedRunningTime="2026-01-27 07:48:40.412596391 +0000 UTC m=+186.723700456" Jan 27 07:48:41 crc kubenswrapper[4799]: I0127 07:48:41.150466 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:48:41 crc kubenswrapper[4799]: I0127 07:48:41.150882 4799 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:48:41 crc kubenswrapper[4799]: I0127 07:48:41.288331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76z59" event={"ID":"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5","Type":"ContainerStarted","Data":"a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5"} Jan 27 07:48:41 crc kubenswrapper[4799]: I0127 07:48:41.309452 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tm4nj" podStartSLOduration=3.96763603 podStartE2EDuration="1m1.309428584s" podCreationTimestamp="2026-01-27 07:47:40 +0000 UTC" firstStartedPulling="2026-01-27 07:47:42.438562008 +0000 UTC m=+128.749666073" lastFinishedPulling="2026-01-27 07:48:39.780354552 +0000 UTC m=+186.091458627" observedRunningTime="2026-01-27 07:48:40.433688612 +0000 UTC m=+186.744792677" watchObservedRunningTime="2026-01-27 07:48:41.309428584 +0000 UTC m=+187.620532649" Jan 27 07:48:41 crc kubenswrapper[4799]: I0127 07:48:41.312884 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-76z59" podStartSLOduration=1.840029762 podStartE2EDuration="1m0.312875771s" podCreationTimestamp="2026-01-27 07:47:41 +0000 UTC" firstStartedPulling="2026-01-27 07:47:42.435036969 +0000 UTC m=+128.746141034" lastFinishedPulling="2026-01-27 07:48:40.907882978 +0000 UTC m=+187.218987043" observedRunningTime="2026-01-27 07:48:41.305934896 +0000 UTC m=+187.617038971" watchObservedRunningTime="2026-01-27 07:48:41.312875771 +0000 UTC m=+187.623979836" Jan 27 07:48:41 crc kubenswrapper[4799]: I0127 07:48:41.430627 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:48:41 crc kubenswrapper[4799]: I0127 07:48:41.430688 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:48:42 crc kubenswrapper[4799]: I0127 07:48:42.208032 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tm4nj" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="registry-server" probeResult="failure" output=< Jan 27 07:48:42 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 07:48:42 crc kubenswrapper[4799]: > Jan 27 07:48:42 crc kubenswrapper[4799]: I0127 07:48:42.475402 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-76z59" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="registry-server" probeResult="failure" output=< Jan 27 07:48:42 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 07:48:42 crc kubenswrapper[4799]: > Jan 27 07:48:42 crc kubenswrapper[4799]: I0127 07:48:42.839984 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:48:42 crc kubenswrapper[4799]: I0127 07:48:42.840057 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:48:43 crc kubenswrapper[4799]: I0127 07:48:43.876753 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g7thd" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="registry-server" probeResult="failure" output=< Jan 27 07:48:43 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 07:48:43 crc kubenswrapper[4799]: > Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.327875 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.329508 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.354126 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.354169 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.372584 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.397995 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.420181 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.475759 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.475833 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.519559 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.980188 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:48:49 crc kubenswrapper[4799]: I0127 07:48:49.980243 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:48:50 crc 
kubenswrapper[4799]: I0127 07:48:50.027852 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:48:50 crc kubenswrapper[4799]: I0127 07:48:50.391789 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:48:50 crc kubenswrapper[4799]: I0127 07:48:50.393139 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:48:50 crc kubenswrapper[4799]: I0127 07:48:50.396496 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:48:51 crc kubenswrapper[4799]: I0127 07:48:51.205858 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:48:51 crc kubenswrapper[4799]: I0127 07:48:51.269567 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:48:51 crc kubenswrapper[4799]: I0127 07:48:51.477980 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:48:51 crc kubenswrapper[4799]: I0127 07:48:51.524501 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-76z59" Jan 27 07:48:51 crc kubenswrapper[4799]: I0127 07:48:51.775474 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j7dw8"] Jan 27 07:48:52 crc kubenswrapper[4799]: I0127 07:48:52.360637 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j7dw8" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="registry-server" 
containerID="cri-o://2ea78423f012ffc0a0d19c944faa8e8dbe56f41376552f8ca625ea56e9cc6b1a" gracePeriod=2 Jan 27 07:48:52 crc kubenswrapper[4799]: I0127 07:48:52.374614 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2wfc"] Jan 27 07:48:52 crc kubenswrapper[4799]: I0127 07:48:52.374977 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x2wfc" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="registry-server" containerID="cri-o://8ece0d3b4bda0ee239a92f7ed9f518ba0e6c82043ee9f737bd8b27255d7ea35e" gracePeriod=2 Jan 27 07:48:52 crc kubenswrapper[4799]: I0127 07:48:52.881231 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:48:52 crc kubenswrapper[4799]: I0127 07:48:52.928770 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.377770 4799 generic.go:334] "Generic (PLEG): container finished" podID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerID="2ea78423f012ffc0a0d19c944faa8e8dbe56f41376552f8ca625ea56e9cc6b1a" exitCode=0 Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.377855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7dw8" event={"ID":"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7","Type":"ContainerDied","Data":"2ea78423f012ffc0a0d19c944faa8e8dbe56f41376552f8ca625ea56e9cc6b1a"} Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.383942 4799 generic.go:334] "Generic (PLEG): container finished" podID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerID="8ece0d3b4bda0ee239a92f7ed9f518ba0e6c82043ee9f737bd8b27255d7ea35e" exitCode=0 Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.384037 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x2wfc" event={"ID":"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a","Type":"ContainerDied","Data":"8ece0d3b4bda0ee239a92f7ed9f518ba0e6c82043ee9f737bd8b27255d7ea35e"} Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.558265 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.666255 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2wfc" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.686963 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjcm\" (UniqueName: \"kubernetes.io/projected/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-kube-api-access-7sjcm\") pod \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.687050 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-catalog-content\") pod \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.687126 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-utilities\") pod \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\" (UID: \"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7\") " Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.688393 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-utilities" (OuterVolumeSpecName: "utilities") pod "ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" (UID: 
"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.693770 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-kube-api-access-7sjcm" (OuterVolumeSpecName: "kube-api-access-7sjcm") pod "ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" (UID: "ef0e0c84-7483-438a-8ad1-b105cd4e2cc7"). InnerVolumeSpecName "kube-api-access-7sjcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.732676 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.732747 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.747633 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" (UID: "ef0e0c84-7483-438a-8ad1-b105cd4e2cc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.789009 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-catalog-content\") pod \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.789135 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-utilities\") pod \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.789278 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnbpp\" (UniqueName: \"kubernetes.io/projected/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-kube-api-access-wnbpp\") pod \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\" (UID: \"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a\") " Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.789734 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sjcm\" (UniqueName: \"kubernetes.io/projected/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-kube-api-access-7sjcm\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.789779 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.789793 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.791521 
4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-utilities" (OuterVolumeSpecName: "utilities") pod "d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" (UID: "d96b8a37-8325-4d8c-b8ce-94f40dd0a21a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.800622 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-kube-api-access-wnbpp" (OuterVolumeSpecName: "kube-api-access-wnbpp") pod "d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" (UID: "d96b8a37-8325-4d8c-b8ce-94f40dd0a21a"). InnerVolumeSpecName "kube-api-access-wnbpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.838694 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" (UID: "d96b8a37-8325-4d8c-b8ce-94f40dd0a21a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.891523 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.891571 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnbpp\" (UniqueName: \"kubernetes.io/projected/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-kube-api-access-wnbpp\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:53 crc kubenswrapper[4799]: I0127 07:48:53.891585 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.180679 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76z59"] Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.181177 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-76z59" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="registry-server" containerID="cri-o://a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5" gracePeriod=2 Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.212932 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f74844c8-r8kkz"] Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.213673 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" podUID="c849a377-ab04-4f2a-b659-4e96dfa619cb" containerName="controller-manager" containerID="cri-o://16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287" gracePeriod=30 Jan 27 
07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.311528 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"] Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.311892 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" podUID="536cb82e-df7d-4758-9c45-f00d28dfc3fd" containerName="route-controller-manager" containerID="cri-o://7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be" gracePeriod=30 Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.391388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j7dw8" event={"ID":"ef0e0c84-7483-438a-8ad1-b105cd4e2cc7","Type":"ContainerDied","Data":"a9b7d5b77c8f93652264ad8f5659c8420005b105b538b532d443b006a702ac78"} Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.391836 4799 scope.go:117] "RemoveContainer" containerID="2ea78423f012ffc0a0d19c944faa8e8dbe56f41376552f8ca625ea56e9cc6b1a" Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.391768 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j7dw8" Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.394972 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2wfc" event={"ID":"d96b8a37-8325-4d8c-b8ce-94f40dd0a21a","Type":"ContainerDied","Data":"8e134aa9306fd64388dd0b422dbb363baadf3995f99da26925faaabb2b95b6b4"} Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.395142 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2wfc"
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.410757 4799 scope.go:117] "RemoveContainer" containerID="71113dfbb7b90c0cda0aa98c883a9983309907b4f1c99fe5f4188e41aa308961"
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.430012 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2wfc"]
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.437774 4799 scope.go:117] "RemoveContainer" containerID="e4f7c8a8929df5cd9c75adb43050ca80063dfa2b314aac3b96a1522808a4d572"
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.445534 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x2wfc"]
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.464251 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" path="/var/lib/kubelet/pods/d96b8a37-8325-4d8c-b8ce-94f40dd0a21a/volumes"
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.465033 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j7dw8"]
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.465072 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j7dw8"]
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.476127 4799 scope.go:117] "RemoveContainer" containerID="8ece0d3b4bda0ee239a92f7ed9f518ba0e6c82043ee9f737bd8b27255d7ea35e"
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.496815 4799 scope.go:117] "RemoveContainer" containerID="2eedf418ef5cd514829096e50686b9a5f69934d4ae75f02c4cd8766021a990cd"
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.521901 4799 scope.go:117] "RemoveContainer" containerID="b315f2d217d14c5faa843fde12396659021872465ecb116f152432f7abfb94e7"
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.766128 4799
patch_prober.go:28] interesting pod/route-controller-manager-6bc97b9f78-kmzzr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Jan 27 07:48:54 crc kubenswrapper[4799]: I0127 07:48:54.766183 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" podUID="536cb82e-df7d-4758-9c45-f00d28dfc3fd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.276820 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76z59"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.299178 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.310629 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.411441 4799 generic.go:334] "Generic (PLEG): container finished" podID="c849a377-ab04-4f2a-b659-4e96dfa619cb" containerID="16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287" exitCode=0
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.411569 4799 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.411544 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" event={"ID":"c849a377-ab04-4f2a-b659-4e96dfa619cb","Type":"ContainerDied","Data":"16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287"}
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.411701 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f74844c8-r8kkz" event={"ID":"c849a377-ab04-4f2a-b659-4e96dfa619cb","Type":"ContainerDied","Data":"65705656a0b14348df6cc08ef94309affff0153e392edd1f7b2ce71b0f56b58d"}
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.411758 4799 scope.go:117] "RemoveContainer" containerID="16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412226 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-client-ca\") pod \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412262 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-proxy-ca-bundles\") pod \"c849a377-ab04-4f2a-b659-4e96dfa619cb\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412289 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536cb82e-df7d-4758-9c45-f00d28dfc3fd-serving-cert\") pod \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412374 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-catalog-content\") pod \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412406 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-utilities\") pod \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412427 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-config\") pod \"c849a377-ab04-4f2a-b659-4e96dfa619cb\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412472 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c849a377-ab04-4f2a-b659-4e96dfa619cb-serving-cert\") pod \"c849a377-ab04-4f2a-b659-4e96dfa619cb\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412528 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4fmv\" (UniqueName: \"kubernetes.io/projected/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-kube-api-access-x4fmv\") pod \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\" (UID: \"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412565 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7zqx\" (UniqueName:
\"kubernetes.io/projected/c849a377-ab04-4f2a-b659-4e96dfa619cb-kube-api-access-c7zqx\") pod \"c849a377-ab04-4f2a-b659-4e96dfa619cb\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412604 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-config\") pod \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412668 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-client-ca\") pod \"c849a377-ab04-4f2a-b659-4e96dfa619cb\" (UID: \"c849a377-ab04-4f2a-b659-4e96dfa619cb\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.412704 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t24l7\" (UniqueName: \"kubernetes.io/projected/536cb82e-df7d-4758-9c45-f00d28dfc3fd-kube-api-access-t24l7\") pod \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\" (UID: \"536cb82e-df7d-4758-9c45-f00d28dfc3fd\") "
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.413417 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c849a377-ab04-4f2a-b659-4e96dfa619cb" (UID: "c849a377-ab04-4f2a-b659-4e96dfa619cb"). InnerVolumeSpecName "proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.413442 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "536cb82e-df7d-4758-9c45-f00d28dfc3fd" (UID: "536cb82e-df7d-4758-9c45-f00d28dfc3fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.414288 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-config" (OuterVolumeSpecName: "config") pod "536cb82e-df7d-4758-9c45-f00d28dfc3fd" (UID: "536cb82e-df7d-4758-9c45-f00d28dfc3fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.419425 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "c849a377-ab04-4f2a-b659-4e96dfa619cb" (UID: "c849a377-ab04-4f2a-b659-4e96dfa619cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.419599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-utilities" (OuterVolumeSpecName: "utilities") pod "5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" (UID: "5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.419842 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-kube-api-access-x4fmv" (OuterVolumeSpecName: "kube-api-access-x4fmv") pod "5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" (UID: "5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5"). InnerVolumeSpecName "kube-api-access-x4fmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.419963 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-config" (OuterVolumeSpecName: "config") pod "c849a377-ab04-4f2a-b659-4e96dfa619cb" (UID: "c849a377-ab04-4f2a-b659-4e96dfa619cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.422196 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536cb82e-df7d-4758-9c45-f00d28dfc3fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "536cb82e-df7d-4758-9c45-f00d28dfc3fd" (UID: "536cb82e-df7d-4758-9c45-f00d28dfc3fd"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.423900 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerID="a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5" exitCode=0
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.423979 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76z59" event={"ID":"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5","Type":"ContainerDied","Data":"a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5"}
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.424022 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76z59" event={"ID":"5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5","Type":"ContainerDied","Data":"c3f01ed9bdc03bdb4abcf1a39379d74717ca9b7199fdadae298e6c89a4495f4b"}
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.424057 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536cb82e-df7d-4758-9c45-f00d28dfc3fd-kube-api-access-t24l7" (OuterVolumeSpecName: "kube-api-access-t24l7") pod "536cb82e-df7d-4758-9c45-f00d28dfc3fd" (UID: "536cb82e-df7d-4758-9c45-f00d28dfc3fd"). InnerVolumeSpecName "kube-api-access-t24l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.424124 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76z59"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.437313 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c849a377-ab04-4f2a-b659-4e96dfa619cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c849a377-ab04-4f2a-b659-4e96dfa619cb" (UID: "c849a377-ab04-4f2a-b659-4e96dfa619cb"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.437478 4799 scope.go:117] "RemoveContainer" containerID="16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.437826 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c849a377-ab04-4f2a-b659-4e96dfa619cb-kube-api-access-c7zqx" (OuterVolumeSpecName: "kube-api-access-c7zqx") pod "c849a377-ab04-4f2a-b659-4e96dfa619cb" (UID: "c849a377-ab04-4f2a-b659-4e96dfa619cb"). InnerVolumeSpecName "kube-api-access-c7zqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.441106 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287\": container with ID starting with 16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287 not found: ID does not exist" containerID="16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.441193 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287"} err="failed to get container status \"16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287\": rpc error: code = NotFound desc = could not find container \"16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287\": container with ID starting with 16a6e4307adbd8b9d4b007fa98ee3d094c7c00f0d67eb54a25ea0cfc876e9287 not found: ID does not exist"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.441225 4799 scope.go:117] "RemoveContainer" containerID="a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.456662 4799
generic.go:334] "Generic (PLEG): container finished" podID="536cb82e-df7d-4758-9c45-f00d28dfc3fd" containerID="7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be" exitCode=0
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.456997 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.457049 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" event={"ID":"536cb82e-df7d-4758-9c45-f00d28dfc3fd","Type":"ContainerDied","Data":"7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be"}
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.457089 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr" event={"ID":"536cb82e-df7d-4758-9c45-f00d28dfc3fd","Type":"ContainerDied","Data":"6e838c1f9382ae9de821db794d590b4f1ad1302b114e49167181f2e4683343d4"}
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.484769 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" (UID: "5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.487670 4799 scope.go:117] "RemoveContainer" containerID="9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.504606 4799 scope.go:117] "RemoveContainer" containerID="0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.505929 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"]
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514217 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bc97b9f78-kmzzr"]
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514471 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514518 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t24l7\" (UniqueName: \"kubernetes.io/projected/536cb82e-df7d-4758-9c45-f00d28dfc3fd-kube-api-access-t24l7\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514532 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514543 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514553 4799 reconciler_common.go:293] "Volume detached
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536cb82e-df7d-4758-9c45-f00d28dfc3fd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514565 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514579 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514591 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c849a377-ab04-4f2a-b659-4e96dfa619cb-config\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514601 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c849a377-ab04-4f2a-b659-4e96dfa619cb-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514611 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4fmv\" (UniqueName: \"kubernetes.io/projected/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5-kube-api-access-x4fmv\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514622 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7zqx\" (UniqueName: \"kubernetes.io/projected/c849a377-ab04-4f2a-b659-4e96dfa619cb-kube-api-access-c7zqx\") on node \"crc\" DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.514634 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536cb82e-df7d-4758-9c45-f00d28dfc3fd-config\") on node \"crc\"
DevicePath \"\""
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.522855 4799 scope.go:117] "RemoveContainer" containerID="a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.529832 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5\": container with ID starting with a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5 not found: ID does not exist" containerID="a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.529896 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5"} err="failed to get container status \"a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5\": rpc error: code = NotFound desc = could not find container \"a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5\": container with ID starting with a67a5ac4150516365a6a714f7ceb076a86a781b4bfddc211c7ea555dc2973cc5 not found: ID does not exist"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.529933 4799 scope.go:117] "RemoveContainer" containerID="9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.532711 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d\": container with ID starting with 9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d not found: ID does not exist" containerID="9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.532799 4799 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d"} err="failed to get container status \"9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d\": rpc error: code = NotFound desc = could not find container \"9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d\": container with ID starting with 9d591871fed8cbb53f3156f30051e6cfdb28af56ec607462d6fb9ec234621c3d not found: ID does not exist"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.532840 4799 scope.go:117] "RemoveContainer" containerID="0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.533733 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38\": container with ID starting with 0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38 not found: ID does not exist" containerID="0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.533756 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38"} err="failed to get container status \"0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38\": rpc error: code = NotFound desc = could not find container \"0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38\": container with ID starting with 0ff65c562afd22278e78efb72ecb0f88f0211ecbc03ca0b0532cc6f3d7d0fc38 not found: ID does not exist"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.533776 4799 scope.go:117] "RemoveContainer" containerID="7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.556447 4799 scope.go:117]
"RemoveContainer" containerID="7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.558997 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be\": container with ID starting with 7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be not found: ID does not exist" containerID="7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.559040 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be"} err="failed to get container status \"7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be\": rpc error: code = NotFound desc = could not find container \"7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be\": container with ID starting with 7d253e14aafb1da631b3425f0054e89c69d266a23187087cf86e8aa8dc6c15be not found: ID does not exist"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.627607 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb"]
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.627992 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="extract-utilities"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628031 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="extract-utilities"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628052 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="extract-utilities"
Jan 27 07:48:55 crc
kubenswrapper[4799]: I0127 07:48:55.628063 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="extract-utilities"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628081 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="extract-content"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628094 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="extract-content"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628113 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536cb82e-df7d-4758-9c45-f00d28dfc3fd" containerName="route-controller-manager"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628126 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="536cb82e-df7d-4758-9c45-f00d28dfc3fd" containerName="route-controller-manager"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628140 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="extract-content"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628151 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="extract-content"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628164 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c849a377-ab04-4f2a-b659-4e96dfa619cb" containerName="controller-manager"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628176 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c849a377-ab04-4f2a-b659-4e96dfa619cb" containerName="controller-manager"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628195 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="extract-content"
Jan 27
07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628207 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="extract-content"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628226 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628239 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628252 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628263 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628281 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="extract-utilities"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628292 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="extract-utilities"
Jan 27 07:48:55 crc kubenswrapper[4799]: E0127 07:48:55.628332 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628345 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628500 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="536cb82e-df7d-4758-9c45-f00d28dfc3fd" containerName="route-controller-manager"
Jan 27
07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628519 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628535 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c849a377-ab04-4f2a-b659-4e96dfa619cb" containerName="controller-manager"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628545 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.628556 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96b8a37-8325-4d8c-b8ce-94f40dd0a21a" containerName="registry-server"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.629235 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.631646 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.632620 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.632888 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.634267 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.635000 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 07:48:55
crc kubenswrapper[4799]: I0127 07:48:55.640647 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.642158 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb"] Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.717399 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-client-ca\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.717454 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-serving-cert\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.717576 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-config\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.717602 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpz4v\" (UniqueName: 
\"kubernetes.io/projected/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-kube-api-access-xpz4v\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.745575 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f74844c8-r8kkz"] Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.748244 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69f74844c8-r8kkz"] Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.762486 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76z59"] Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.767759 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-76z59"] Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.818745 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-client-ca\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.818845 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-serving-cert\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.819070 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-config\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.819187 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpz4v\" (UniqueName: \"kubernetes.io/projected/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-kube-api-access-xpz4v\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.820812 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-config\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.820878 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-client-ca\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.823018 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-serving-cert\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.848155 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpz4v\" (UniqueName: \"kubernetes.io/projected/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-kube-api-access-xpz4v\") pod \"route-controller-manager-7b8b4ddbc5-s7rbb\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:55 crc kubenswrapper[4799]: I0127 07:48:55.971111 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.441971 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb"] Jan 27 07:48:56 crc kubenswrapper[4799]: W0127 07:48:56.458679 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0227c54f_b8f4_4b0a_b88d_c2ccd38141bd.slice/crio-716ce5e0f9ce56ad24ef24bb31874ae475e1fd16e002aad06a426c41fec9ad67 WatchSource:0}: Error finding container 716ce5e0f9ce56ad24ef24bb31874ae475e1fd16e002aad06a426c41fec9ad67: Status 404 returned error can't find the container with id 716ce5e0f9ce56ad24ef24bb31874ae475e1fd16e002aad06a426c41fec9ad67 Jan 27 07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.463632 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536cb82e-df7d-4758-9c45-f00d28dfc3fd" path="/var/lib/kubelet/pods/536cb82e-df7d-4758-9c45-f00d28dfc3fd/volumes" Jan 27 07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.464228 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5" path="/var/lib/kubelet/pods/5d6fd89a-fee3-4ef2-b8e9-48eeec3ae0b5/volumes" Jan 27 
07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.465225 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c849a377-ab04-4f2a-b659-4e96dfa619cb" path="/var/lib/kubelet/pods/c849a377-ab04-4f2a-b659-4e96dfa619cb/volumes" Jan 27 07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.466480 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0e0c84-7483-438a-8ad1-b105cd4e2cc7" path="/var/lib/kubelet/pods/ef0e0c84-7483-438a-8ad1-b105cd4e2cc7/volumes" Jan 27 07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.495322 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" event={"ID":"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd","Type":"ContainerStarted","Data":"716ce5e0f9ce56ad24ef24bb31874ae475e1fd16e002aad06a426c41fec9ad67"} Jan 27 07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.777060 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g7thd"] Jan 27 07:48:56 crc kubenswrapper[4799]: I0127 07:48:56.777995 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g7thd" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="registry-server" containerID="cri-o://3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5" gracePeriod=2 Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.186482 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.246274 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpq8m\" (UniqueName: \"kubernetes.io/projected/6be235d3-0500-4c98-abf6-a8709c12e8a7-kube-api-access-qpq8m\") pod \"6be235d3-0500-4c98-abf6-a8709c12e8a7\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.246419 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-catalog-content\") pod \"6be235d3-0500-4c98-abf6-a8709c12e8a7\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.246469 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-utilities\") pod \"6be235d3-0500-4c98-abf6-a8709c12e8a7\" (UID: \"6be235d3-0500-4c98-abf6-a8709c12e8a7\") " Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.248807 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-utilities" (OuterVolumeSpecName: "utilities") pod "6be235d3-0500-4c98-abf6-a8709c12e8a7" (UID: "6be235d3-0500-4c98-abf6-a8709c12e8a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.256395 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be235d3-0500-4c98-abf6-a8709c12e8a7-kube-api-access-qpq8m" (OuterVolumeSpecName: "kube-api-access-qpq8m") pod "6be235d3-0500-4c98-abf6-a8709c12e8a7" (UID: "6be235d3-0500-4c98-abf6-a8709c12e8a7"). InnerVolumeSpecName "kube-api-access-qpq8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.348089 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.348134 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpq8m\" (UniqueName: \"kubernetes.io/projected/6be235d3-0500-4c98-abf6-a8709c12e8a7-kube-api-access-qpq8m\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.378133 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6be235d3-0500-4c98-abf6-a8709c12e8a7" (UID: "6be235d3-0500-4c98-abf6-a8709c12e8a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.449645 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be235d3-0500-4c98-abf6-a8709c12e8a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.504621 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" event={"ID":"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd","Type":"ContainerStarted","Data":"9977570c4187a955db27ddadb0a689a12579ed82568fb9197ce54095ebd2f7dd"} Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.504763 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.507588 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerID="3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5" exitCode=0 Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.507680 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g7thd" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.507687 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7thd" event={"ID":"6be235d3-0500-4c98-abf6-a8709c12e8a7","Type":"ContainerDied","Data":"3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5"} Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.508290 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7thd" event={"ID":"6be235d3-0500-4c98-abf6-a8709c12e8a7","Type":"ContainerDied","Data":"58978e751c1455ad3b65c627b5f82e263865cecd44473883d2809e540a948e6b"} Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.508330 4799 scope.go:117] "RemoveContainer" containerID="3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.510705 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.527426 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" podStartSLOduration=3.5274017090000003 podStartE2EDuration="3.527401709s" podCreationTimestamp="2026-01-27 07:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:48:57.523569672 +0000 UTC m=+203.834673737" watchObservedRunningTime="2026-01-27 07:48:57.527401709 +0000 UTC m=+203.838505774" Jan 27 07:48:57 crc 
kubenswrapper[4799]: I0127 07:48:57.528378 4799 scope.go:117] "RemoveContainer" containerID="cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.563621 4799 scope.go:117] "RemoveContainer" containerID="9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.576417 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g7thd"] Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.577651 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g7thd"] Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.579583 4799 scope.go:117] "RemoveContainer" containerID="3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5" Jan 27 07:48:57 crc kubenswrapper[4799]: E0127 07:48:57.580939 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5\": container with ID starting with 3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5 not found: ID does not exist" containerID="3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.581013 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5"} err="failed to get container status \"3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5\": rpc error: code = NotFound desc = could not find container \"3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5\": container with ID starting with 3742cdd6bdaac839e0851671886abbecc1cc89ca426025aada04db0bc6c41ae5 not found: ID does not exist" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.581059 4799 scope.go:117] 
"RemoveContainer" containerID="cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e" Jan 27 07:48:57 crc kubenswrapper[4799]: E0127 07:48:57.581704 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e\": container with ID starting with cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e not found: ID does not exist" containerID="cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.581785 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e"} err="failed to get container status \"cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e\": rpc error: code = NotFound desc = could not find container \"cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e\": container with ID starting with cbff8834952886e59e7932bc2d9456e0b2ac134a292683c57c2fe0f9e552069e not found: ID does not exist" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.581809 4799 scope.go:117] "RemoveContainer" containerID="9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4" Jan 27 07:48:57 crc kubenswrapper[4799]: E0127 07:48:57.582057 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4\": container with ID starting with 9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4 not found: ID does not exist" containerID="9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.582084 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4"} err="failed to get container status \"9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4\": rpc error: code = NotFound desc = could not find container \"9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4\": container with ID starting with 9bbf152b37a42ae20f5788c1de15ac74a921171f3fa2c8c34dea28e3543783c4 not found: ID does not exist" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.628375 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7586585969-r4mzw"] Jan 27 07:48:57 crc kubenswrapper[4799]: E0127 07:48:57.628745 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="extract-utilities" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.628767 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="extract-utilities" Jan 27 07:48:57 crc kubenswrapper[4799]: E0127 07:48:57.628795 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="registry-server" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.628801 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="registry-server" Jan 27 07:48:57 crc kubenswrapper[4799]: E0127 07:48:57.628816 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="extract-content" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.628824 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="extract-content" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.628937 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" containerName="registry-server" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.629612 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.632654 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.633063 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.633333 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.633901 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.634042 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.634281 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.639838 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.673439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7586585969-r4mzw"] Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.753997 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-proxy-ca-bundles\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.754393 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-config\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.754538 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kqnz\" (UniqueName: \"kubernetes.io/projected/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-kube-api-access-7kqnz\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.754590 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-serving-cert\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.754667 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-client-ca\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 
07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.856954 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-proxy-ca-bundles\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.857117 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-config\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.857158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kqnz\" (UniqueName: \"kubernetes.io/projected/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-kube-api-access-7kqnz\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.857182 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-serving-cert\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.857211 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-client-ca\") pod \"controller-manager-7586585969-r4mzw\" (UID: 
\"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.858632 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-client-ca\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.858884 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-proxy-ca-bundles\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.858925 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-config\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.863663 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-serving-cert\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.880995 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kqnz\" (UniqueName: 
\"kubernetes.io/projected/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-kube-api-access-7kqnz\") pod \"controller-manager-7586585969-r4mzw\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:57 crc kubenswrapper[4799]: I0127 07:48:57.961731 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:58 crc kubenswrapper[4799]: I0127 07:48:58.215538 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7586585969-r4mzw"] Jan 27 07:48:58 crc kubenswrapper[4799]: W0127 07:48:58.226633 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bf1c5c7_8b08_47d0_a62a_75b855e1b994.slice/crio-f174461d4bf8e366b8b48ebeecddbf1d9c3aa8253649c6a83e41e1a4b5feed6b WatchSource:0}: Error finding container f174461d4bf8e366b8b48ebeecddbf1d9c3aa8253649c6a83e41e1a4b5feed6b: Status 404 returned error can't find the container with id f174461d4bf8e366b8b48ebeecddbf1d9c3aa8253649c6a83e41e1a4b5feed6b Jan 27 07:48:58 crc kubenswrapper[4799]: I0127 07:48:58.460744 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be235d3-0500-4c98-abf6-a8709c12e8a7" path="/var/lib/kubelet/pods/6be235d3-0500-4c98-abf6-a8709c12e8a7/volumes" Jan 27 07:48:58 crc kubenswrapper[4799]: I0127 07:48:58.518035 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" event={"ID":"9bf1c5c7-8b08-47d0-a62a-75b855e1b994","Type":"ContainerStarted","Data":"4c5267cfff32518b22d18348f7bd107a49a001b8c23d854b7f5f8b6fd682f0f0"} Jan 27 07:48:58 crc kubenswrapper[4799]: I0127 07:48:58.518109 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" 
event={"ID":"9bf1c5c7-8b08-47d0-a62a-75b855e1b994","Type":"ContainerStarted","Data":"f174461d4bf8e366b8b48ebeecddbf1d9c3aa8253649c6a83e41e1a4b5feed6b"} Jan 27 07:48:58 crc kubenswrapper[4799]: I0127 07:48:58.518478 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:58 crc kubenswrapper[4799]: I0127 07:48:58.523809 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:48:58 crc kubenswrapper[4799]: I0127 07:48:58.545787 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" podStartSLOduration=4.545755677 podStartE2EDuration="4.545755677s" podCreationTimestamp="2026-01-27 07:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:48:58.542153708 +0000 UTC m=+204.853257793" watchObservedRunningTime="2026-01-27 07:48:58.545755677 +0000 UTC m=+204.856859782" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.030318 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" podUID="810999fd-fa8e-4e6c-9b07-bc58f174202b" containerName="oauth-openshift" containerID="cri-o://96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a" gracePeriod=15 Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.493555 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.537098 4799 generic.go:334] "Generic (PLEG): container finished" podID="810999fd-fa8e-4e6c-9b07-bc58f174202b" containerID="96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a" exitCode=0 Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.537365 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" event={"ID":"810999fd-fa8e-4e6c-9b07-bc58f174202b","Type":"ContainerDied","Data":"96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a"} Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.537794 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" event={"ID":"810999fd-fa8e-4e6c-9b07-bc58f174202b","Type":"ContainerDied","Data":"eecde9149ee5a3f8c9a25110cf026c0426a45fa3be3b5722732317b610ba01a9"} Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.537511 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n67f6" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.537842 4799 scope.go:117] "RemoveContainer" containerID="96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.572816 4799 scope.go:117] "RemoveContainer" containerID="96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a" Jan 27 07:49:00 crc kubenswrapper[4799]: E0127 07:49:00.573738 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a\": container with ID starting with 96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a not found: ID does not exist" containerID="96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.573810 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a"} err="failed to get container status \"96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a\": rpc error: code = NotFound desc = could not find container \"96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a\": container with ID starting with 96c8a472f8501a852fad2ef4ed247e865751e3cdddc05d5b0bea346ce6b65c6a not found: ID does not exist" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.595440 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-policies\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.595536 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-serving-cert\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.595568 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-service-ca\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.595593 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-ocp-branding-template\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.595644 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-error\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.595678 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-trusted-ca-bundle\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.595705 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-router-certs\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596042 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-idp-0-file-data\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596126 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-cliconfig\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596148 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-dir\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596185 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97f98\" (UniqueName: \"kubernetes.io/projected/810999fd-fa8e-4e6c-9b07-bc58f174202b-kube-api-access-97f98\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596267 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-login\") pod 
\"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596454 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-provider-selection\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596531 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-session\") pod \"810999fd-fa8e-4e6c-9b07-bc58f174202b\" (UID: \"810999fd-fa8e-4e6c-9b07-bc58f174202b\") " Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596774 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.596788 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.597249 4799 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.597276 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.599148 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.599477 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.599693 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.603359 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.603706 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.603719 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810999fd-fa8e-4e6c-9b07-bc58f174202b-kube-api-access-97f98" (OuterVolumeSpecName: "kube-api-access-97f98") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "kube-api-access-97f98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.604150 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.604253 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.604791 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.605217 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.607911 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.617666 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "810999fd-fa8e-4e6c-9b07-bc58f174202b" (UID: "810999fd-fa8e-4e6c-9b07-bc58f174202b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699366 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97f98\" (UniqueName: \"kubernetes.io/projected/810999fd-fa8e-4e6c-9b07-bc58f174202b-kube-api-access-97f98\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699417 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699433 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699447 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699461 4799 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-audit-policies\") on node \"crc\" DevicePath 
\"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699471 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699481 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699491 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699501 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699511 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699520 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.699530 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/810999fd-fa8e-4e6c-9b07-bc58f174202b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.875209 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n67f6"] Jan 27 07:49:00 crc kubenswrapper[4799]: I0127 07:49:00.879633 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n67f6"] Jan 27 07:49:02 crc kubenswrapper[4799]: I0127 07:49:02.283880 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 07:49:02 crc kubenswrapper[4799]: I0127 07:49:02.461883 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810999fd-fa8e-4e6c-9b07-bc58f174202b" path="/var/lib/kubelet/pods/810999fd-fa8e-4e6c-9b07-bc58f174202b/volumes" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.643153 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5b7945bc75-c7fzm"] Jan 27 07:49:07 crc kubenswrapper[4799]: E0127 07:49:07.645744 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810999fd-fa8e-4e6c-9b07-bc58f174202b" containerName="oauth-openshift" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.645966 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="810999fd-fa8e-4e6c-9b07-bc58f174202b" containerName="oauth-openshift" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.646451 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="810999fd-fa8e-4e6c-9b07-bc58f174202b" containerName="oauth-openshift" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.648774 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.654665 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.657173 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.657219 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.657506 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.657775 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.657921 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.657961 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.658447 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.658656 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.658707 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b7945bc75-c7fzm"] Jan 27 
07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.659158 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.659478 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.659675 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.672571 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.680008 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.685609 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.750845 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-session\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.750960 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a79a36-a27f-43b4-930f-4aa8279f0c8b-audit-dir\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " 
pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.750999 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751031 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-error\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751060 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751103 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scd7w\" (UniqueName: \"kubernetes.io/projected/51a79a36-a27f-43b4-930f-4aa8279f0c8b-kube-api-access-scd7w\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751137 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751197 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751226 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-audit-policies\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751261 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751285 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751314 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751401 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.751529 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-login\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.852833 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: 
\"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.852914 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scd7w\" (UniqueName: \"kubernetes.io/projected/51a79a36-a27f-43b4-930f-4aa8279f0c8b-kube-api-access-scd7w\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.852943 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.853719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.853806 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-audit-policies\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.853876 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.853932 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.853976 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.854055 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.854111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-login\") pod \"oauth-openshift-5b7945bc75-c7fzm\" 
(UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.854210 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-session\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.854259 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a79a36-a27f-43b4-930f-4aa8279f0c8b-audit-dir\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.854287 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.854324 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-error\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.855763 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.856050 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.856477 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a79a36-a27f-43b4-930f-4aa8279f0c8b-audit-dir\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.856539 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-audit-policies\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.857177 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc 
kubenswrapper[4799]: I0127 07:49:07.862886 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-session\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.863111 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-login\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.862896 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.864795 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.865231 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.865937 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.866931 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.867003 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51a79a36-a27f-43b4-930f-4aa8279f0c8b-v4-0-config-user-template-error\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 crc kubenswrapper[4799]: I0127 07:49:07.883046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scd7w\" (UniqueName: \"kubernetes.io/projected/51a79a36-a27f-43b4-930f-4aa8279f0c8b-kube-api-access-scd7w\") pod \"oauth-openshift-5b7945bc75-c7fzm\" (UID: \"51a79a36-a27f-43b4-930f-4aa8279f0c8b\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:07 
crc kubenswrapper[4799]: I0127 07:49:07.986425 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:08 crc kubenswrapper[4799]: I0127 07:49:08.465587 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b7945bc75-c7fzm"] Jan 27 07:49:08 crc kubenswrapper[4799]: I0127 07:49:08.612651 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" event={"ID":"51a79a36-a27f-43b4-930f-4aa8279f0c8b","Type":"ContainerStarted","Data":"87805f99ed8256e241e99256da6230ac032a0adc6be30521a17508b5990b83a4"} Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.069903 4799 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.071033 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.073540 4799 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.073664 4799 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.073979 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d" gracePeriod=15 Jan 27 07:49:09 crc 
kubenswrapper[4799]: I0127 07:49:09.073986 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042" gracePeriod=15 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.074052 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc" gracePeriod=15 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.074138 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88" gracePeriod=15 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.074135 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df" gracePeriod=15 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.078495 4799 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.079113 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079154 4799 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.079188 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079205 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.079230 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079245 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.079268 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079282 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.079304 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079344 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.079366 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 07:49:09 crc kubenswrapper[4799]: 
I0127 07:49:09.079385 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.079409 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079423 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079650 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079681 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079697 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079722 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.079744 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.080170 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.147148 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189287 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189398 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189427 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189452 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189498 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189531 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189572 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.189606 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291056 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291132 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291161 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291210 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291231 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291256 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291292 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291363 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291377 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291437 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291453 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291475 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291487 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291507 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291519 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.291539 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.437820 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:09 crc kubenswrapper[4799]: W0127 07:49:09.456257 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d21466561a1407b02906923ce26e15dcf0da2a8f9f62996c49ae8d5b4f87761a WatchSource:0}: Error finding container d21466561a1407b02906923ce26e15dcf0da2a8f9f62996c49ae8d5b4f87761a: Status 404 returned error can't find the container with id d21466561a1407b02906923ce26e15dcf0da2a8f9f62996c49ae8d5b4f87761a Jan 27 07:49:09 crc kubenswrapper[4799]: E0127 07:49:09.460089 4799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8702be845a01 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:49:09.459212801 +0000 UTC m=+215.770316906,LastTimestamp:2026-01-27 07:49:09.459212801 +0000 UTC m=+215.770316906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.628845 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="7f10e10b-7684-4395-b1f1-051344597338" containerID="9a8041ccb5d3f6c95794158278dadd40862066abcba280aac420da88bc6f3646" exitCode=0 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.629021 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f10e10b-7684-4395-b1f1-051344597338","Type":"ContainerDied","Data":"9a8041ccb5d3f6c95794158278dadd40862066abcba280aac420da88bc6f3646"} Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.629987 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.630205 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.630654 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.632215 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/0.log" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.632285 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" containerID="caf43e108813ea9b7b9e46b866cdf9efee0b58e9661638552e5654aa2731c281" exitCode=255 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.632362 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" event={"ID":"51a79a36-a27f-43b4-930f-4aa8279f0c8b","Type":"ContainerDied","Data":"caf43e108813ea9b7b9e46b866cdf9efee0b58e9661638552e5654aa2731c281"} Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.633207 4799 scope.go:117] "RemoveContainer" containerID="caf43e108813ea9b7b9e46b866cdf9efee0b58e9661638552e5654aa2731c281" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.634003 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.634372 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d21466561a1407b02906923ce26e15dcf0da2a8f9f62996c49ae8d5b4f87761a"} Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.634447 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.635055 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.635705 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.642226 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.644832 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.646093 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc" exitCode=0 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.646132 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88" exitCode=0 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.646148 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042" exitCode=0 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.646161 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df" exitCode=2 Jan 27 07:49:09 crc kubenswrapper[4799]: I0127 07:49:09.646215 4799 scope.go:117] "RemoveContainer" containerID="a110b5bd05a9116d8635bab2561ac82853723a26329d6706b82f0c655e23eb71" Jan 27 07:49:10 crc kubenswrapper[4799]: E0127 07:49:10.596663 4799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8702be845a01 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:49:09.459212801 +0000 UTC m=+215.770316906,LastTimestamp:2026-01-27 07:49:09.459212801 +0000 UTC m=+215.770316906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.658105 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.662414 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/1.log" Jan 27 07:49:10 crc 
kubenswrapper[4799]: I0127 07:49:10.663165 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/0.log" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.663241 4799 generic.go:334] "Generic (PLEG): container finished" podID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" containerID="faba8a465ab8883b89ccfe0d58194577aeb35f5ab6c03877166a6c6b41f2e004" exitCode=255 Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.663354 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" event={"ID":"51a79a36-a27f-43b4-930f-4aa8279f0c8b","Type":"ContainerDied","Data":"faba8a465ab8883b89ccfe0d58194577aeb35f5ab6c03877166a6c6b41f2e004"} Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.663428 4799 scope.go:117] "RemoveContainer" containerID="caf43e108813ea9b7b9e46b866cdf9efee0b58e9661638552e5654aa2731c281" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.664657 4799 scope.go:117] "RemoveContainer" containerID="faba8a465ab8883b89ccfe0d58194577aeb35f5ab6c03877166a6c6b41f2e004" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.665009 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.665640 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:10 crc 
kubenswrapper[4799]: I0127 07:49:10.666654 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.667206 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9"} Jan 27 07:49:10 crc kubenswrapper[4799]: E0127 07:49:10.667461 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5b7945bc75-c7fzm_openshift-authentication(51a79a36-a27f-43b4-930f-4aa8279f0c8b)\"" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.668239 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.668987 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 
38.102.83.98:6443: connect: connection refused" Jan 27 07:49:10 crc kubenswrapper[4799]: I0127 07:49:10.669724 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.120890 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.121723 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.122476 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.122964 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.240831 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-kubelet-dir\") pod \"7f10e10b-7684-4395-b1f1-051344597338\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.241498 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f10e10b-7684-4395-b1f1-051344597338-kube-api-access\") pod \"7f10e10b-7684-4395-b1f1-051344597338\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.241473 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f10e10b-7684-4395-b1f1-051344597338" (UID: "7f10e10b-7684-4395-b1f1-051344597338"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.241543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-var-lock\") pod \"7f10e10b-7684-4395-b1f1-051344597338\" (UID: \"7f10e10b-7684-4395-b1f1-051344597338\") " Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.241788 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-var-lock" (OuterVolumeSpecName: "var-lock") pod "7f10e10b-7684-4395-b1f1-051344597338" (UID: "7f10e10b-7684-4395-b1f1-051344597338"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.241956 4799 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.241973 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f10e10b-7684-4395-b1f1-051344597338-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.263619 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f10e10b-7684-4395-b1f1-051344597338-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f10e10b-7684-4395-b1f1-051344597338" (UID: "7f10e10b-7684-4395-b1f1-051344597338"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.343088 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f10e10b-7684-4395-b1f1-051344597338-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.546233 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.547950 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.548855 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.549183 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.549642 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.550132 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.647815 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.647915 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.647951 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.648086 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.648113 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.648260 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.648583 4799 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.648613 4799 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.648631 4799 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.687293 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.687355 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7f10e10b-7684-4395-b1f1-051344597338","Type":"ContainerDied","Data":"bc6ec405638ea2ae08b39c47c34dd008e54a92450ead2d443b3b52388403cd56"} Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.687454 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6ec405638ea2ae08b39c47c34dd008e54a92450ead2d443b3b52388403cd56" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.692264 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/1.log" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.692960 4799 scope.go:117] "RemoveContainer" containerID="faba8a465ab8883b89ccfe0d58194577aeb35f5ab6c03877166a6c6b41f2e004" Jan 27 07:49:11 crc 
kubenswrapper[4799]: I0127 07:49:11.693541 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: E0127 07:49:11.693659 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5b7945bc75-c7fzm_openshift-authentication(51a79a36-a27f-43b4-930f-4aa8279f0c8b)\"" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.694383 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.694970 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.695436 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": 
dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.699090 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.700532 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d" exitCode=0 Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.700647 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.700704 4799 scope.go:117] "RemoveContainer" containerID="eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.715597 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.716446 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.717220 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.717792 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.728296 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.728990 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.729705 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.730614 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.730945 4799 scope.go:117] "RemoveContainer" containerID="0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.759187 4799 scope.go:117] "RemoveContainer" containerID="6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.782737 4799 scope.go:117] "RemoveContainer" containerID="aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.805439 4799 scope.go:117] "RemoveContainer" containerID="a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.831836 4799 scope.go:117] "RemoveContainer" containerID="749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.865910 4799 scope.go:117] "RemoveContainer" containerID="eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc" Jan 27 07:49:11 crc kubenswrapper[4799]: E0127 07:49:11.866855 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\": container with ID starting with eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc not found: ID does not exist" containerID="eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.866934 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc"} err="failed to get container status 
\"eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\": rpc error: code = NotFound desc = could not find container \"eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc\": container with ID starting with eab2c1f95fd0fcc7e8fb373429b5294c3411b7e4d159fea8821e740b6cf800cc not found: ID does not exist" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.866983 4799 scope.go:117] "RemoveContainer" containerID="0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88" Jan 27 07:49:11 crc kubenswrapper[4799]: E0127 07:49:11.867872 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\": container with ID starting with 0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88 not found: ID does not exist" containerID="0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.867925 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88"} err="failed to get container status \"0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\": rpc error: code = NotFound desc = could not find container \"0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88\": container with ID starting with 0c1119173b48ce7c99445742e38382d25483446521a37e82060c06438ae4be88 not found: ID does not exist" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.867960 4799 scope.go:117] "RemoveContainer" containerID="6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042" Jan 27 07:49:11 crc kubenswrapper[4799]: E0127 07:49:11.868628 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\": container with ID starting with 6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042 not found: ID does not exist" containerID="6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.868731 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042"} err="failed to get container status \"6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\": rpc error: code = NotFound desc = could not find container \"6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042\": container with ID starting with 6887fbec954de4762f318915a36acc6d4d642e9f370a34aaaa2cbeb4e2db1042 not found: ID does not exist" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.868912 4799 scope.go:117] "RemoveContainer" containerID="aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df" Jan 27 07:49:11 crc kubenswrapper[4799]: E0127 07:49:11.869570 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\": container with ID starting with aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df not found: ID does not exist" containerID="aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.869618 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df"} err="failed to get container status \"aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\": rpc error: code = NotFound desc = could not find container \"aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df\": container with ID 
starting with aa1ace418f5a4a7e8d652467ebffe6aab84261c86e4152f539d53562c7ef22df not found: ID does not exist" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.869642 4799 scope.go:117] "RemoveContainer" containerID="a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d" Jan 27 07:49:11 crc kubenswrapper[4799]: E0127 07:49:11.870109 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\": container with ID starting with a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d not found: ID does not exist" containerID="a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.870144 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d"} err="failed to get container status \"a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\": rpc error: code = NotFound desc = could not find container \"a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d\": container with ID starting with a99c88afed987217d78d928a3427e95ba8cb62634e95fb6b4aab88933a71051d not found: ID does not exist" Jan 27 07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.870167 4799 scope.go:117] "RemoveContainer" containerID="749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202" Jan 27 07:49:11 crc kubenswrapper[4799]: E0127 07:49:11.870925 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\": container with ID starting with 749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202 not found: ID does not exist" containerID="749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202" Jan 27 
07:49:11 crc kubenswrapper[4799]: I0127 07:49:11.870978 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202"} err="failed to get container status \"749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\": rpc error: code = NotFound desc = could not find container \"749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202\": container with ID starting with 749b0cdcf71bc592a7b63a0464464a2668b0b58d4aaa355d614bc238a1edb202 not found: ID does not exist" Jan 27 07:49:12 crc kubenswrapper[4799]: I0127 07:49:12.463162 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 07:49:13 crc kubenswrapper[4799]: E0127 07:49:13.730985 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:13 crc kubenswrapper[4799]: E0127 07:49:13.731509 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:13 crc kubenswrapper[4799]: E0127 07:49:13.732025 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:13 crc kubenswrapper[4799]: E0127 07:49:13.732560 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection 
refused" Jan 27 07:49:13 crc kubenswrapper[4799]: E0127 07:49:13.732951 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:13 crc kubenswrapper[4799]: I0127 07:49:13.732986 4799 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 07:49:13 crc kubenswrapper[4799]: E0127 07:49:13.733428 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Jan 27 07:49:13 crc kubenswrapper[4799]: E0127 07:49:13.934443 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Jan 27 07:49:14 crc kubenswrapper[4799]: E0127 07:49:14.336466 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Jan 27 07:49:14 crc kubenswrapper[4799]: I0127 07:49:14.455468 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:14 crc kubenswrapper[4799]: I0127 07:49:14.455868 4799 status_manager.go:851] 
"Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:14 crc kubenswrapper[4799]: I0127 07:49:14.456281 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:15 crc kubenswrapper[4799]: E0127 07:49:15.139019 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Jan 27 07:49:15 crc kubenswrapper[4799]: E0127 07:49:15.863169 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:49:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:49:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:49:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T07:49:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:15 crc kubenswrapper[4799]: E0127 07:49:15.864897 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:15 crc kubenswrapper[4799]: E0127 07:49:15.865619 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:15 crc kubenswrapper[4799]: E0127 07:49:15.866003 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 
07:49:15 crc kubenswrapper[4799]: E0127 07:49:15.866562 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:15 crc kubenswrapper[4799]: E0127 07:49:15.866767 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 07:49:16 crc kubenswrapper[4799]: E0127 07:49:16.740190 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Jan 27 07:49:17 crc kubenswrapper[4799]: I0127 07:49:17.987376 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:17 crc kubenswrapper[4799]: I0127 07:49:17.987880 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:17 crc kubenswrapper[4799]: I0127 07:49:17.988958 4799 scope.go:117] "RemoveContainer" containerID="faba8a465ab8883b89ccfe0d58194577aeb35f5ab6c03877166a6c6b41f2e004" Jan 27 07:49:17 crc kubenswrapper[4799]: E0127 07:49:17.989420 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5b7945bc75-c7fzm_openshift-authentication(51a79a36-a27f-43b4-930f-4aa8279f0c8b)\"" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" Jan 27 07:49:19 crc kubenswrapper[4799]: E0127 07:49:19.942160 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="6.4s" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.451602 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.453051 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.454699 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.456106 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.482556 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.482953 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:20 crc kubenswrapper[4799]: E0127 07:49:20.483805 4799 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.484702 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:20 crc kubenswrapper[4799]: W0127 07:49:20.525041 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3ed84564beeb1f74ffd80e78ec978c8c2cc043975d9fb960e4387983cb4a34fe WatchSource:0}: Error finding container 3ed84564beeb1f74ffd80e78ec978c8c2cc043975d9fb960e4387983cb4a34fe: Status 404 returned error can't find the container with id 3ed84564beeb1f74ffd80e78ec978c8c2cc043975d9fb960e4387983cb4a34fe Jan 27 07:49:20 crc kubenswrapper[4799]: E0127 07:49:20.599189 4799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8702be845a01 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 07:49:09.459212801 +0000 UTC m=+215.770316906,LastTimestamp:2026-01-27 07:49:09.459212801 +0000 UTC m=+215.770316906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 07:49:20 crc kubenswrapper[4799]: I0127 07:49:20.781831 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3ed84564beeb1f74ffd80e78ec978c8c2cc043975d9fb960e4387983cb4a34fe"} Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.798199 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.801114 4799 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14" exitCode=1 Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.801268 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14"} Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.802582 4799 scope.go:117] "RemoveContainer" containerID="a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.806405 4799 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="313c7f3f12aed106d2d214367a1ce35a6c1102de97ed7bcddd5d65591dc98b0a" exitCode=0 Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.806586 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"313c7f3f12aed106d2d214367a1ce35a6c1102de97ed7bcddd5d65591dc98b0a"} Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.807451 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.807494 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:21 crc kubenswrapper[4799]: E0127 07:49:21.808467 4799 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.819511 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.825954 4799 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.826737 4799 status_manager.go:851] "Failed to get status for pod" 
podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.827623 4799 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.828218 4799 status_manager.go:851] "Failed to get status for pod" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5b7945bc75-c7fzm\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.828737 4799 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.829402 4799 status_manager.go:851] "Failed to get status for pod" podUID="7f10e10b-7684-4395-b1f1-051344597338" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:21 crc kubenswrapper[4799]: I0127 07:49:21.829880 4799 status_manager.go:851] "Failed 
to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 27 07:49:22 crc kubenswrapper[4799]: I0127 07:49:22.821062 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 07:49:22 crc kubenswrapper[4799]: I0127 07:49:22.821623 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"106e36c0e49e0284d2e15c4c3a5e5c2b5815e9f7f281443f6aef6616bd6747b8"} Jan 27 07:49:22 crc kubenswrapper[4799]: I0127 07:49:22.826721 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c9fea7998f8a581cd6d1804c844032b4565e3e7320c0e2b74102784e790a4c4"} Jan 27 07:49:22 crc kubenswrapper[4799]: I0127 07:49:22.826780 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"59ff935d01d4071d143a9fee3592417a0c19c1ecaea79905840f46233dc063a9"} Jan 27 07:49:22 crc kubenswrapper[4799]: I0127 07:49:22.826792 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d0c6a1a4b4319beb0d5f8474c7cdc964996cc785febdef13a58bf0266d6e151c"} Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.731410 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.731851 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.731975 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.837614 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f241346dfa36fdaa67671551964cab798c255023cfed13a79917b37d3692a989"} Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.837694 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0bd26e6a228ed2a5b8da4b6e71dc0d353cf4be0ae99454bdebd33b56c9399761"} Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.838097 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.838125 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.838378 4799 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.838487 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d" gracePeriod=600 Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.867024 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.867329 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 07:49:23 crc kubenswrapper[4799]: I0127 07:49:23.867578 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 07:49:24 crc kubenswrapper[4799]: I0127 07:49:24.848332 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d" exitCode=0 Jan 27 07:49:24 crc kubenswrapper[4799]: I0127 07:49:24.848453 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d"} Jan 27 07:49:24 crc kubenswrapper[4799]: I0127 07:49:24.848873 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"3fcf94bc860237b7c54a7f06c52954a7a635d56630dbf7dfbfa67647833277b7"} Jan 27 07:49:25 crc kubenswrapper[4799]: I0127 07:49:25.484967 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:25 crc kubenswrapper[4799]: I0127 07:49:25.485037 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:25 crc kubenswrapper[4799]: I0127 07:49:25.493199 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:27 crc kubenswrapper[4799]: I0127 07:49:27.758591 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:49:28 crc kubenswrapper[4799]: I0127 07:49:28.852705 4799 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:28 crc kubenswrapper[4799]: I0127 07:49:28.876579 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:28 crc kubenswrapper[4799]: I0127 07:49:28.876584 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:28 crc kubenswrapper[4799]: I0127 07:49:28.876669 
4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:28 crc kubenswrapper[4799]: I0127 07:49:28.881109 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:28 crc kubenswrapper[4799]: I0127 07:49:28.884468 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7f974f2c-539c-467c-ad10-bae042a6e0ea" Jan 27 07:49:29 crc kubenswrapper[4799]: I0127 07:49:29.451572 4799 scope.go:117] "RemoveContainer" containerID="faba8a465ab8883b89ccfe0d58194577aeb35f5ab6c03877166a6c6b41f2e004" Jan 27 07:49:29 crc kubenswrapper[4799]: I0127 07:49:29.884029 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/1.log" Jan 27 07:49:29 crc kubenswrapper[4799]: I0127 07:49:29.885053 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" event={"ID":"51a79a36-a27f-43b4-930f-4aa8279f0c8b","Type":"ContainerStarted","Data":"e753629114a3d3216700966670044fabaef8a602a21c53a3b56b5e4e260c436b"} Jan 27 07:49:29 crc kubenswrapper[4799]: I0127 07:49:29.885367 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:29 crc kubenswrapper[4799]: I0127 07:49:29.885393 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:29 crc kubenswrapper[4799]: I0127 07:49:29.885496 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 
27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.074589 4799 patch_prober.go:28] interesting pod/oauth-openshift-5b7945bc75-c7fzm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": read tcp 10.217.0.2:57846->10.217.0.62:6443: read: connection reset by peer" start-of-body= Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.074679 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": read tcp 10.217.0.2:57846->10.217.0.62:6443: read: connection reset by peer" Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.896176 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/2.log" Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.896912 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/1.log" Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.896981 4799 generic.go:334] "Generic (PLEG): container finished" podID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" containerID="e753629114a3d3216700966670044fabaef8a602a21c53a3b56b5e4e260c436b" exitCode=255 Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.897112 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" event={"ID":"51a79a36-a27f-43b4-930f-4aa8279f0c8b","Type":"ContainerDied","Data":"e753629114a3d3216700966670044fabaef8a602a21c53a3b56b5e4e260c436b"} Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.897250 4799 scope.go:117] "RemoveContainer" 
containerID="faba8a465ab8883b89ccfe0d58194577aeb35f5ab6c03877166a6c6b41f2e004" Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.898127 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.898174 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:30 crc kubenswrapper[4799]: I0127 07:49:30.898185 4799 scope.go:117] "RemoveContainer" containerID="e753629114a3d3216700966670044fabaef8a602a21c53a3b56b5e4e260c436b" Jan 27 07:49:30 crc kubenswrapper[4799]: E0127 07:49:30.898927 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-5b7945bc75-c7fzm_openshift-authentication(51a79a36-a27f-43b4-930f-4aa8279f0c8b)\"" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" Jan 27 07:49:31 crc kubenswrapper[4799]: I0127 07:49:31.904170 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/2.log" Jan 27 07:49:31 crc kubenswrapper[4799]: I0127 07:49:31.905192 4799 scope.go:117] "RemoveContainer" containerID="e753629114a3d3216700966670044fabaef8a602a21c53a3b56b5e4e260c436b" Jan 27 07:49:31 crc kubenswrapper[4799]: E0127 07:49:31.905493 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-5b7945bc75-c7fzm_openshift-authentication(51a79a36-a27f-43b4-930f-4aa8279f0c8b)\"" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" 
podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" Jan 27 07:49:33 crc kubenswrapper[4799]: I0127 07:49:33.866636 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 07:49:33 crc kubenswrapper[4799]: I0127 07:49:33.867870 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 07:49:34 crc kubenswrapper[4799]: I0127 07:49:34.473608 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7f974f2c-539c-467c-ad10-bae042a6e0ea" Jan 27 07:49:37 crc kubenswrapper[4799]: I0127 07:49:37.986921 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:37 crc kubenswrapper[4799]: I0127 07:49:37.989020 4799 scope.go:117] "RemoveContainer" containerID="e753629114a3d3216700966670044fabaef8a602a21c53a3b56b5e4e260c436b" Jan 27 07:49:37 crc kubenswrapper[4799]: E0127 07:49:37.989592 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-5b7945bc75-c7fzm_openshift-authentication(51a79a36-a27f-43b4-930f-4aa8279f0c8b)\"" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" podUID="51a79a36-a27f-43b4-930f-4aa8279f0c8b" Jan 27 07:49:39 crc 
kubenswrapper[4799]: I0127 07:49:39.209722 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 07:49:39 crc kubenswrapper[4799]: I0127 07:49:39.609613 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 07:49:39 crc kubenswrapper[4799]: I0127 07:49:39.791274 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.083348 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.200442 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.415938 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.588183 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.694419 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.728294 4799 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.735932 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=31.735900002 podStartE2EDuration="31.735900002s" podCreationTimestamp="2026-01-27 07:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:49:28.790969029 +0000 UTC m=+235.102073094" watchObservedRunningTime="2026-01-27 07:49:40.735900002 +0000 UTC m=+247.047004077" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.736890 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.737098 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.737783 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.737851 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="847339c5-936a-45d5-b326-b9aa8d8d5d97" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.747638 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.772993 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.772962236 podStartE2EDuration="12.772962236s" podCreationTimestamp="2026-01-27 07:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:49:40.76754448 +0000 UTC m=+247.078648585" watchObservedRunningTime="2026-01-27 07:49:40.772962236 +0000 UTC m=+247.084066341" Jan 27 07:49:40 crc kubenswrapper[4799]: I0127 07:49:40.962182 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 07:49:41 crc kubenswrapper[4799]: I0127 07:49:41.341211 4799 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 07:49:41 crc kubenswrapper[4799]: I0127 07:49:41.449485 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 07:49:41 crc kubenswrapper[4799]: I0127 07:49:41.474249 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 07:49:41 crc kubenswrapper[4799]: I0127 07:49:41.653654 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 07:49:41 crc kubenswrapper[4799]: I0127 07:49:41.856570 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 07:49:41 crc kubenswrapper[4799]: I0127 07:49:41.932189 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 07:49:41 crc kubenswrapper[4799]: I0127 07:49:41.960769 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.111358 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.218150 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.231842 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.368235 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.377910 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.389181 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.463932 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.498655 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.716338 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.788943 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.924792 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.966293 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.971442 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 07:49:42 crc kubenswrapper[4799]: I0127 07:49:42.987228 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 
07:49:43.007027 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.477770 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.548935 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.580887 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.810125 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.866864 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.866957 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.867050 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.868220 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" 
containerStatusID={"Type":"cri-o","ID":"106e36c0e49e0284d2e15c4c3a5e5c2b5815e9f7f281443f6aef6616bd6747b8"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.868551 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://106e36c0e49e0284d2e15c4c3a5e5c2b5815e9f7f281443f6aef6616bd6747b8" gracePeriod=30 Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.873822 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.883988 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.889581 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.935713 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:49:43 crc kubenswrapper[4799]: I0127 07:49:43.939224 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.340897 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.402753 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.464358 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.498352 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.513065 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.538193 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.558933 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.582255 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.582277 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.693119 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.702630 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.742187 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 
07:49:44.762162 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.816708 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.819187 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.831973 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.838379 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.921850 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.946635 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 07:49:44 crc kubenswrapper[4799]: I0127 07:49:44.968167 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.067571 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.217124 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.233096 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 07:49:45 crc kubenswrapper[4799]: 
I0127 07:49:45.241048 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.259927 4799 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.376530 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.469201 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.490358 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.493632 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.517521 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.681484 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.717785 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.743586 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.765449 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.827633 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.862230 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 07:49:45 crc kubenswrapper[4799]: I0127 07:49:45.889063 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.058290 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.061705 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.226124 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.230733 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.247851 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.272491 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.316397 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.449281 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.494987 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.528465 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.536541 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.553163 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.587967 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.642880 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.658905 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 07:49:46 crc kubenswrapper[4799]: I0127 07:49:46.808960 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.093478 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 
07:49:47.093607 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.095233 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.110692 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.135620 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.255678 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.313765 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.349958 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.350521 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.403384 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.427525 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.504565 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.526236 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.527261 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.657585 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.681769 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.741016 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.753174 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.785528 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.785575 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.811927 4799 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.863071 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.947294 4799 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.984831 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 07:49:47 crc kubenswrapper[4799]: I0127 07:49:47.992686 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.040243 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.091923 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.183865 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.327176 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.328484 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.387706 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.413366 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.625068 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 07:49:48 crc 
kubenswrapper[4799]: I0127 07:49:48.699909 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 07:49:48 crc kubenswrapper[4799]: I0127 07:49:48.916250 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.004448 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.080413 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.168951 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.210397 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.210521 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.242865 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.248592 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.274734 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.343259 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 07:49:49 
crc kubenswrapper[4799]: I0127 07:49:49.386014 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.417809 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.429878 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.460477 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.488655 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.564764 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.611742 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.627931 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.635970 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.636483 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.652403 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.688594 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.705087 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.716185 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.738539 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.742284 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.803422 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.879778 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.888928 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.959514 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.975250 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.978008 
4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 07:49:49 crc kubenswrapper[4799]: I0127 07:49:49.998846 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.058342 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.112430 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.180523 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.227375 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.278186 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.292822 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.316161 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.347123 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.403723 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 
27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.517227 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.547983 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.548021 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.560740 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.562655 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.570798 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.641334 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.647436 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.652818 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 07:49:50 crc kubenswrapper[4799]: I0127 07:49:50.743930 4799 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.001020 4799 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.018368 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.052294 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.062429 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.088926 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.117452 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.239594 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.244004 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.293161 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.309285 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.338766 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 07:49:51 crc 
kubenswrapper[4799]: I0127 07:49:51.383598 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.395970 4799 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.396277 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9" gracePeriod=5 Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.451813 4799 scope.go:117] "RemoveContainer" containerID="e753629114a3d3216700966670044fabaef8a602a21c53a3b56b5e4e260c436b" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.471325 4799 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.557908 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.603910 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.616637 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.659009 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.688165 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.708533 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.740818 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.751399 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.794192 4799 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.885225 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 07:49:51 crc kubenswrapper[4799]: I0127 07:49:51.893706 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.065877 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.128769 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.146896 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5b7945bc75-c7fzm_51a79a36-a27f-43b4-930f-4aa8279f0c8b/oauth-openshift/2.log" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.146968 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" 
event={"ID":"51a79a36-a27f-43b4-930f-4aa8279f0c8b","Type":"ContainerStarted","Data":"7ce25fd8ea29b22dbbf0ebc6f5a0395b66a355ac2ba4ef7c8e7558043162f590"} Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.148716 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.173224 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" podStartSLOduration=77.173202678 podStartE2EDuration="1m17.173202678s" podCreationTimestamp="2026-01-27 07:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:49:29.910689049 +0000 UTC m=+236.221793124" watchObservedRunningTime="2026-01-27 07:49:52.173202678 +0000 UTC m=+258.484306763" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.195570 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5b7945bc75-c7fzm" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.365537 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.431750 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.444560 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.458632 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.527807 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.539373 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.627497 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.662185 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.667774 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.723385 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.754395 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.930406 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 07:49:52 crc kubenswrapper[4799]: I0127 07:49:52.958083 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.024397 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.041826 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 07:49:53 crc 
kubenswrapper[4799]: I0127 07:49:53.080809 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.127189 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.151845 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.232185 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.317708 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.455917 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.576054 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.817956 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 07:49:53 crc kubenswrapper[4799]: I0127 07:49:53.831798 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 07:49:54 crc kubenswrapper[4799]: I0127 07:49:54.016405 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 07:49:54 crc kubenswrapper[4799]: I0127 07:49:54.025399 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 07:49:54 crc kubenswrapper[4799]: I0127 07:49:54.163658 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 07:49:54 crc kubenswrapper[4799]: I0127 07:49:54.598270 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 07:49:54 crc kubenswrapper[4799]: I0127 07:49:54.650450 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 07:49:54 crc kubenswrapper[4799]: I0127 07:49:54.666870 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 07:49:54 crc kubenswrapper[4799]: I0127 07:49:54.837486 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.007161 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.114284 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.191440 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.449843 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.488493 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 07:49:55 crc 
kubenswrapper[4799]: I0127 07:49:55.548836 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.582222 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.596363 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.604942 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.728221 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 07:49:55 crc kubenswrapper[4799]: I0127 07:49:55.971196 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 07:49:56 crc kubenswrapper[4799]: I0127 07:49:56.016144 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 07:49:56 crc kubenswrapper[4799]: I0127 07:49:56.204815 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 07:49:56 crc kubenswrapper[4799]: I0127 07:49:56.769723 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.000091 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.000162 4799 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.094999 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095094 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095134 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095189 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095234 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095335 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095335 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095362 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095388 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095640 4799 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095659 4799 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095670 4799 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.095681 4799 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.107038 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.189477 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.189559 4799 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9" exitCode=137 Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.189663 4799 scope.go:117] "RemoveContainer" containerID="99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.189829 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.197082 4799 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.216078 4799 scope.go:117] "RemoveContainer" containerID="99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9" Jan 27 07:49:57 crc kubenswrapper[4799]: E0127 07:49:57.216610 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9\": container with ID starting with 99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9 not found: ID does not exist" containerID="99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.216675 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9"} err="failed to get container status \"99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9\": rpc error: code = NotFound desc = could not find container \"99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9\": container with ID starting with 99fa5435c034ffde660f2e2027f3bf57cfff4582e1b0392f38835aeea8bd50c9 not found: ID does not exist" Jan 27 07:49:57 crc kubenswrapper[4799]: I0127 07:49:57.596847 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 07:49:58 crc kubenswrapper[4799]: I0127 07:49:58.463050 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 07:49:58 crc kubenswrapper[4799]: I0127 07:49:58.464482 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 07:49:58 crc kubenswrapper[4799]: I0127 07:49:58.480983 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:49:58 crc kubenswrapper[4799]: I0127 07:49:58.481037 4799 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b3ea7a8f-63cb-4622-8d7e-f36f36f4d90d" Jan 27 07:49:58 crc kubenswrapper[4799]: I0127 07:49:58.487835 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 07:49:58 crc kubenswrapper[4799]: I0127 07:49:58.487891 4799 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b3ea7a8f-63cb-4622-8d7e-f36f36f4d90d" Jan 27 07:50:11 crc 
kubenswrapper[4799]: I0127 07:50:11.293428 4799 generic.go:334] "Generic (PLEG): container finished" podID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerID="bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a" exitCode=0 Jan 27 07:50:11 crc kubenswrapper[4799]: I0127 07:50:11.293503 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" event={"ID":"2b678fa7-59f7-4a2c-8cae-3f71a17f8734","Type":"ContainerDied","Data":"bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a"} Jan 27 07:50:11 crc kubenswrapper[4799]: I0127 07:50:11.295797 4799 scope.go:117] "RemoveContainer" containerID="bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a" Jan 27 07:50:12 crc kubenswrapper[4799]: I0127 07:50:12.302861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" event={"ID":"2b678fa7-59f7-4a2c-8cae-3f71a17f8734","Type":"ContainerStarted","Data":"591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf"} Jan 27 07:50:12 crc kubenswrapper[4799]: I0127 07:50:12.304015 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:50:12 crc kubenswrapper[4799]: I0127 07:50:12.306004 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:50:14 crc kubenswrapper[4799]: I0127 07:50:14.335034 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 07:50:14 crc kubenswrapper[4799]: I0127 07:50:14.337878 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 
07:50:14 crc kubenswrapper[4799]: I0127 07:50:14.337939 4799 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="106e36c0e49e0284d2e15c4c3a5e5c2b5815e9f7f281443f6aef6616bd6747b8" exitCode=137 Jan 27 07:50:14 crc kubenswrapper[4799]: I0127 07:50:14.338072 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"106e36c0e49e0284d2e15c4c3a5e5c2b5815e9f7f281443f6aef6616bd6747b8"} Jan 27 07:50:14 crc kubenswrapper[4799]: I0127 07:50:14.338174 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc184f4397dd6005e7dbe935921923fff067c51a0537439f7a85c9c2eac29472"} Jan 27 07:50:14 crc kubenswrapper[4799]: I0127 07:50:14.338217 4799 scope.go:117] "RemoveContainer" containerID="a4bf569fcc0d5e3fb9b595643b9876eb8c7e9de23412d9c560295eaac6fd2b14" Jan 27 07:50:15 crc kubenswrapper[4799]: I0127 07:50:15.347467 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 07:50:17 crc kubenswrapper[4799]: I0127 07:50:17.758449 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:50:23 crc kubenswrapper[4799]: I0127 07:50:23.867246 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:50:23 crc kubenswrapper[4799]: I0127 07:50:23.873773 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:50:28 crc kubenswrapper[4799]: 
I0127 07:50:24.417408 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 07:50:34 crc kubenswrapper[4799]: I0127 07:50:34.168640 4799 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 07:50:34 crc kubenswrapper[4799]: I0127 07:50:34.980256 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb"] Jan 27 07:50:34 crc kubenswrapper[4799]: I0127 07:50:34.980607 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" podUID="0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" containerName="route-controller-manager" containerID="cri-o://9977570c4187a955db27ddadb0a689a12579ed82568fb9197ce54095ebd2f7dd" gracePeriod=30 Jan 27 07:50:34 crc kubenswrapper[4799]: I0127 07:50:34.993518 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7586585969-r4mzw"] Jan 27 07:50:34 crc kubenswrapper[4799]: I0127 07:50:34.993859 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" podUID="9bf1c5c7-8b08-47d0-a62a-75b855e1b994" containerName="controller-manager" containerID="cri-o://4c5267cfff32518b22d18348f7bd107a49a001b8c23d854b7f5f8b6fd682f0f0" gracePeriod=30 Jan 27 07:50:35 crc kubenswrapper[4799]: I0127 07:50:35.496429 4799 generic.go:334] "Generic (PLEG): container finished" podID="9bf1c5c7-8b08-47d0-a62a-75b855e1b994" containerID="4c5267cfff32518b22d18348f7bd107a49a001b8c23d854b7f5f8b6fd682f0f0" exitCode=0 Jan 27 07:50:35 crc kubenswrapper[4799]: I0127 07:50:35.496516 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" 
event={"ID":"9bf1c5c7-8b08-47d0-a62a-75b855e1b994","Type":"ContainerDied","Data":"4c5267cfff32518b22d18348f7bd107a49a001b8c23d854b7f5f8b6fd682f0f0"} Jan 27 07:50:35 crc kubenswrapper[4799]: I0127 07:50:35.498759 4799 generic.go:334] "Generic (PLEG): container finished" podID="0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" containerID="9977570c4187a955db27ddadb0a689a12579ed82568fb9197ce54095ebd2f7dd" exitCode=0 Jan 27 07:50:35 crc kubenswrapper[4799]: I0127 07:50:35.498804 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" event={"ID":"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd","Type":"ContainerDied","Data":"9977570c4187a955db27ddadb0a689a12579ed82568fb9197ce54095ebd2f7dd"} Jan 27 07:50:35 crc kubenswrapper[4799]: I0127 07:50:35.954964 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:50:35 crc kubenswrapper[4799]: I0127 07:50:35.965320 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050447 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-serving-cert\") pod \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050605 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-config\") pod \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050739 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-proxy-ca-bundles\") pod \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050791 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpz4v\" (UniqueName: \"kubernetes.io/projected/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-kube-api-access-xpz4v\") pod \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050823 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kqnz\" (UniqueName: \"kubernetes.io/projected/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-kube-api-access-7kqnz\") pod \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050850 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-serving-cert\") pod \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050892 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-client-ca\") pod \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050927 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-config\") pod \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\" (UID: \"9bf1c5c7-8b08-47d0-a62a-75b855e1b994\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.050967 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-client-ca\") pod \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\" (UID: \"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd\") " Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.052535 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" (UID: "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.052920 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-config" (OuterVolumeSpecName: "config") pod "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" (UID: "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.053689 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9bf1c5c7-8b08-47d0-a62a-75b855e1b994" (UID: "9bf1c5c7-8b08-47d0-a62a-75b855e1b994"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.054781 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bf1c5c7-8b08-47d0-a62a-75b855e1b994" (UID: "9bf1c5c7-8b08-47d0-a62a-75b855e1b994"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.059517 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" (UID: "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.060319 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-config" (OuterVolumeSpecName: "config") pod "9bf1c5c7-8b08-47d0-a62a-75b855e1b994" (UID: "9bf1c5c7-8b08-47d0-a62a-75b855e1b994"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.063576 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-kube-api-access-7kqnz" (OuterVolumeSpecName: "kube-api-access-7kqnz") pod "9bf1c5c7-8b08-47d0-a62a-75b855e1b994" (UID: "9bf1c5c7-8b08-47d0-a62a-75b855e1b994"). InnerVolumeSpecName "kube-api-access-7kqnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.063670 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-kube-api-access-xpz4v" (OuterVolumeSpecName: "kube-api-access-xpz4v") pod "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" (UID: "0227c54f-b8f4-4b0a-b88d-c2ccd38141bd"). InnerVolumeSpecName "kube-api-access-xpz4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.067935 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bf1c5c7-8b08-47d0-a62a-75b855e1b994" (UID: "9bf1c5c7-8b08-47d0-a62a-75b855e1b994"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153167 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153234 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153258 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153269 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153279 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153287 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153314 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpz4v\" (UniqueName: \"kubernetes.io/projected/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd-kube-api-access-xpz4v\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153324 4799 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7kqnz\" (UniqueName: \"kubernetes.io/projected/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-kube-api-access-7kqnz\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.153331 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf1c5c7-8b08-47d0-a62a-75b855e1b994-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.505273 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.505276 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7586585969-r4mzw" event={"ID":"9bf1c5c7-8b08-47d0-a62a-75b855e1b994","Type":"ContainerDied","Data":"f174461d4bf8e366b8b48ebeecddbf1d9c3aa8253649c6a83e41e1a4b5feed6b"} Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.507311 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" event={"ID":"0227c54f-b8f4-4b0a-b88d-c2ccd38141bd","Type":"ContainerDied","Data":"716ce5e0f9ce56ad24ef24bb31874ae475e1fd16e002aad06a426c41fec9ad67"} Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.507356 4799 scope.go:117] "RemoveContainer" containerID="4c5267cfff32518b22d18348f7bd107a49a001b8c23d854b7f5f8b6fd682f0f0" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.506873 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.533569 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb"] Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.536421 4799 scope.go:117] "RemoveContainer" containerID="9977570c4187a955db27ddadb0a689a12579ed82568fb9197ce54095ebd2f7dd" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.540474 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8b4ddbc5-s7rbb"] Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.544180 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7586585969-r4mzw"] Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.549014 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7586585969-r4mzw"] Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.698773 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw"] Jan 27 07:50:36 crc kubenswrapper[4799]: E0127 07:50:36.701152 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f10e10b-7684-4395-b1f1-051344597338" containerName="installer" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701173 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f10e10b-7684-4395-b1f1-051344597338" containerName="installer" Jan 27 07:50:36 crc kubenswrapper[4799]: E0127 07:50:36.701187 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf1c5c7-8b08-47d0-a62a-75b855e1b994" containerName="controller-manager" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701196 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9bf1c5c7-8b08-47d0-a62a-75b855e1b994" containerName="controller-manager" Jan 27 07:50:36 crc kubenswrapper[4799]: E0127 07:50:36.701224 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" containerName="route-controller-manager" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701234 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" containerName="route-controller-manager" Jan 27 07:50:36 crc kubenswrapper[4799]: E0127 07:50:36.701243 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701249 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701374 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" containerName="route-controller-manager" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701405 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f10e10b-7684-4395-b1f1-051344597338" containerName="installer" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701417 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701429 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf1c5c7-8b08-47d0-a62a-75b855e1b994" containerName="controller-manager" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.701979 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.703833 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.704834 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.705123 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.705126 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.708397 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.709216 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.711466 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-zl76v"] Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.712418 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.715427 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.715617 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.716104 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.716143 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.716227 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.716914 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.720771 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw"] Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.726801 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.726862 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-zl76v"] Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.761392 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ab7f016a-baa3-4384-bd0f-1fe375242c26-serving-cert\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.761438 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxc6\" (UniqueName: \"kubernetes.io/projected/ab7f016a-baa3-4384-bd0f-1fe375242c26-kube-api-access-vcxc6\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.761463 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-proxy-ca-bundles\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.761485 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-config\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.761853 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-client-ca\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " 
pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.761951 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-client-ca\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.762048 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxln\" (UniqueName: \"kubernetes.io/projected/803e5bf0-ba49-4e23-aa07-2f12205f0780-kube-api-access-6sxln\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.762088 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803e5bf0-ba49-4e23-aa07-2f12205f0780-serving-cert\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.762125 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-config\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863287 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-client-ca\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863359 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-client-ca\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863389 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sxln\" (UniqueName: \"kubernetes.io/projected/803e5bf0-ba49-4e23-aa07-2f12205f0780-kube-api-access-6sxln\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863409 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803e5bf0-ba49-4e23-aa07-2f12205f0780-serving-cert\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863429 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-config\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863462 
4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7f016a-baa3-4384-bd0f-1fe375242c26-serving-cert\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863483 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcxc6\" (UniqueName: \"kubernetes.io/projected/ab7f016a-baa3-4384-bd0f-1fe375242c26-kube-api-access-vcxc6\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863512 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-proxy-ca-bundles\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.863536 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-config\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.864549 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-client-ca\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " 
pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.864984 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-config\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.865016 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-client-ca\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.865131 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-proxy-ca-bundles\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.866863 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-config\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.879668 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7f016a-baa3-4384-bd0f-1fe375242c26-serving-cert\") pod 
\"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.884911 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803e5bf0-ba49-4e23-aa07-2f12205f0780-serving-cert\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.884926 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcxc6\" (UniqueName: \"kubernetes.io/projected/ab7f016a-baa3-4384-bd0f-1fe375242c26-kube-api-access-vcxc6\") pod \"route-controller-manager-8596db5b-8wxsw\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:36 crc kubenswrapper[4799]: I0127 07:50:36.904112 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxln\" (UniqueName: \"kubernetes.io/projected/803e5bf0-ba49-4e23-aa07-2f12205f0780-kube-api-access-6sxln\") pod \"controller-manager-68ff5b985d-zl76v\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:37 crc kubenswrapper[4799]: I0127 07:50:37.031251 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:37 crc kubenswrapper[4799]: I0127 07:50:37.045866 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:37 crc kubenswrapper[4799]: I0127 07:50:37.287902 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw"] Jan 27 07:50:37 crc kubenswrapper[4799]: W0127 07:50:37.295332 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab7f016a_baa3_4384_bd0f_1fe375242c26.slice/crio-82c09cf5877680300b6b4a60a3668cc043f9ae07bd6878c6633c914f578aaebb WatchSource:0}: Error finding container 82c09cf5877680300b6b4a60a3668cc043f9ae07bd6878c6633c914f578aaebb: Status 404 returned error can't find the container with id 82c09cf5877680300b6b4a60a3668cc043f9ae07bd6878c6633c914f578aaebb Jan 27 07:50:37 crc kubenswrapper[4799]: I0127 07:50:37.330830 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-zl76v"] Jan 27 07:50:37 crc kubenswrapper[4799]: W0127 07:50:37.338689 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803e5bf0_ba49_4e23_aa07_2f12205f0780.slice/crio-b2abd235440deea7395fc801d8731a6dc462d4be43f21aa6931d3b8cbf61d231 WatchSource:0}: Error finding container b2abd235440deea7395fc801d8731a6dc462d4be43f21aa6931d3b8cbf61d231: Status 404 returned error can't find the container with id b2abd235440deea7395fc801d8731a6dc462d4be43f21aa6931d3b8cbf61d231 Jan 27 07:50:37 crc kubenswrapper[4799]: I0127 07:50:37.513295 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" event={"ID":"ab7f016a-baa3-4384-bd0f-1fe375242c26","Type":"ContainerStarted","Data":"82c09cf5877680300b6b4a60a3668cc043f9ae07bd6878c6633c914f578aaebb"} Jan 27 07:50:37 crc kubenswrapper[4799]: I0127 07:50:37.514431 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" event={"ID":"803e5bf0-ba49-4e23-aa07-2f12205f0780","Type":"ContainerStarted","Data":"b2abd235440deea7395fc801d8731a6dc462d4be43f21aa6931d3b8cbf61d231"} Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.460055 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0227c54f-b8f4-4b0a-b88d-c2ccd38141bd" path="/var/lib/kubelet/pods/0227c54f-b8f4-4b0a-b88d-c2ccd38141bd/volumes" Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.461039 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf1c5c7-8b08-47d0-a62a-75b855e1b994" path="/var/lib/kubelet/pods/9bf1c5c7-8b08-47d0-a62a-75b855e1b994/volumes" Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.523118 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" event={"ID":"ab7f016a-baa3-4384-bd0f-1fe375242c26","Type":"ContainerStarted","Data":"6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58"} Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.523608 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.525360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" event={"ID":"803e5bf0-ba49-4e23-aa07-2f12205f0780","Type":"ContainerStarted","Data":"31723cffc81192ef8c56b3319884da820ff4e540d31fe52ae4ce8cb7e35ae215"} Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.527142 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.530449 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.530764 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.553801 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" podStartSLOduration=3.553773043 podStartE2EDuration="3.553773043s" podCreationTimestamp="2026-01-27 07:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:50:38.549664797 +0000 UTC m=+304.860768862" watchObservedRunningTime="2026-01-27 07:50:38.553773043 +0000 UTC m=+304.864877108" Jan 27 07:50:38 crc kubenswrapper[4799]: I0127 07:50:38.589465 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" podStartSLOduration=3.5894371449999998 podStartE2EDuration="3.589437145s" podCreationTimestamp="2026-01-27 07:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:50:38.566605626 +0000 UTC m=+304.877709691" watchObservedRunningTime="2026-01-27 07:50:38.589437145 +0000 UTC m=+304.900541210" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.219542 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-zl76v"] Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.220473 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" podUID="803e5bf0-ba49-4e23-aa07-2f12205f0780" containerName="controller-manager" 
containerID="cri-o://31723cffc81192ef8c56b3319884da820ff4e540d31fe52ae4ce8cb7e35ae215" gracePeriod=30 Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.685618 4799 generic.go:334] "Generic (PLEG): container finished" podID="803e5bf0-ba49-4e23-aa07-2f12205f0780" containerID="31723cffc81192ef8c56b3319884da820ff4e540d31fe52ae4ce8cb7e35ae215" exitCode=0 Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.685727 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" event={"ID":"803e5bf0-ba49-4e23-aa07-2f12205f0780","Type":"ContainerDied","Data":"31723cffc81192ef8c56b3319884da820ff4e540d31fe52ae4ce8cb7e35ae215"} Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.779036 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.821718 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-proxy-ca-bundles\") pod \"803e5bf0-ba49-4e23-aa07-2f12205f0780\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.821777 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-client-ca\") pod \"803e5bf0-ba49-4e23-aa07-2f12205f0780\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.821821 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sxln\" (UniqueName: \"kubernetes.io/projected/803e5bf0-ba49-4e23-aa07-2f12205f0780-kube-api-access-6sxln\") pod \"803e5bf0-ba49-4e23-aa07-2f12205f0780\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " Jan 27 07:50:54 
crc kubenswrapper[4799]: I0127 07:50:54.821860 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803e5bf0-ba49-4e23-aa07-2f12205f0780-serving-cert\") pod \"803e5bf0-ba49-4e23-aa07-2f12205f0780\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.821900 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-config\") pod \"803e5bf0-ba49-4e23-aa07-2f12205f0780\" (UID: \"803e5bf0-ba49-4e23-aa07-2f12205f0780\") " Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.822766 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "803e5bf0-ba49-4e23-aa07-2f12205f0780" (UID: "803e5bf0-ba49-4e23-aa07-2f12205f0780"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.822992 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-config" (OuterVolumeSpecName: "config") pod "803e5bf0-ba49-4e23-aa07-2f12205f0780" (UID: "803e5bf0-ba49-4e23-aa07-2f12205f0780"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.823768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-client-ca" (OuterVolumeSpecName: "client-ca") pod "803e5bf0-ba49-4e23-aa07-2f12205f0780" (UID: "803e5bf0-ba49-4e23-aa07-2f12205f0780"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.828590 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803e5bf0-ba49-4e23-aa07-2f12205f0780-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "803e5bf0-ba49-4e23-aa07-2f12205f0780" (UID: "803e5bf0-ba49-4e23-aa07-2f12205f0780"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.833547 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803e5bf0-ba49-4e23-aa07-2f12205f0780-kube-api-access-6sxln" (OuterVolumeSpecName: "kube-api-access-6sxln") pod "803e5bf0-ba49-4e23-aa07-2f12205f0780" (UID: "803e5bf0-ba49-4e23-aa07-2f12205f0780"). InnerVolumeSpecName "kube-api-access-6sxln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.923731 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.923802 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.923818 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sxln\" (UniqueName: \"kubernetes.io/projected/803e5bf0-ba49-4e23-aa07-2f12205f0780-kube-api-access-6sxln\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.923834 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803e5bf0-ba49-4e23-aa07-2f12205f0780-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 27 07:50:54 crc kubenswrapper[4799]: I0127 07:50:54.923848 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803e5bf0-ba49-4e23-aa07-2f12205f0780-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.695823 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" event={"ID":"803e5bf0-ba49-4e23-aa07-2f12205f0780","Type":"ContainerDied","Data":"b2abd235440deea7395fc801d8731a6dc462d4be43f21aa6931d3b8cbf61d231"} Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.695944 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff5b985d-zl76v" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.695953 4799 scope.go:117] "RemoveContainer" containerID="31723cffc81192ef8c56b3319884da820ff4e540d31fe52ae4ce8cb7e35ae215" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.711377 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b4b98f94c-sdb54"] Jan 27 07:50:55 crc kubenswrapper[4799]: E0127 07:50:55.711714 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803e5bf0-ba49-4e23-aa07-2f12205f0780" containerName="controller-manager" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.711739 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="803e5bf0-ba49-4e23-aa07-2f12205f0780" containerName="controller-manager" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.711900 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="803e5bf0-ba49-4e23-aa07-2f12205f0780" containerName="controller-manager" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.712895 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.717604 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.722410 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.722952 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.724402 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.725795 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.732934 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.737798 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b4b98f94c-sdb54"] Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.739035 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.743035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxmch\" (UniqueName: \"kubernetes.io/projected/9754fb30-c58e-4ab6-9437-260598baee0f-kube-api-access-jxmch\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " 
pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.743125 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-client-ca\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.743189 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9754fb30-c58e-4ab6-9437-260598baee0f-serving-cert\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.743273 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-proxy-ca-bundles\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.743511 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-config\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.774687 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-zl76v"] Jan 
27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.778259 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68ff5b985d-zl76v"] Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.844750 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-config\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.844833 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxmch\" (UniqueName: \"kubernetes.io/projected/9754fb30-c58e-4ab6-9437-260598baee0f-kube-api-access-jxmch\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.844858 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-client-ca\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.844883 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9754fb30-c58e-4ab6-9437-260598baee0f-serving-cert\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.844903 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-proxy-ca-bundles\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.846495 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-proxy-ca-bundles\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.846772 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-client-ca\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.848002 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9754fb30-c58e-4ab6-9437-260598baee0f-config\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.862859 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9754fb30-c58e-4ab6-9437-260598baee0f-serving-cert\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:55 crc kubenswrapper[4799]: I0127 07:50:55.870145 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxmch\" (UniqueName: \"kubernetes.io/projected/9754fb30-c58e-4ab6-9437-260598baee0f-kube-api-access-jxmch\") pod \"controller-manager-7b4b98f94c-sdb54\" (UID: \"9754fb30-c58e-4ab6-9437-260598baee0f\") " pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:56 crc kubenswrapper[4799]: I0127 07:50:56.057168 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:56 crc kubenswrapper[4799]: I0127 07:50:56.458653 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803e5bf0-ba49-4e23-aa07-2f12205f0780" path="/var/lib/kubelet/pods/803e5bf0-ba49-4e23-aa07-2f12205f0780/volumes" Jan 27 07:50:56 crc kubenswrapper[4799]: I0127 07:50:56.521120 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b4b98f94c-sdb54"] Jan 27 07:50:56 crc kubenswrapper[4799]: W0127 07:50:56.561252 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9754fb30_c58e_4ab6_9437_260598baee0f.slice/crio-296cae8e26724df64b36abbe9e0ac53c01850cc4bf17cc041b8f2eb2ffd01e21 WatchSource:0}: Error finding container 296cae8e26724df64b36abbe9e0ac53c01850cc4bf17cc041b8f2eb2ffd01e21: Status 404 returned error can't find the container with id 296cae8e26724df64b36abbe9e0ac53c01850cc4bf17cc041b8f2eb2ffd01e21 Jan 27 07:50:56 crc kubenswrapper[4799]: I0127 07:50:56.704572 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" event={"ID":"9754fb30-c58e-4ab6-9437-260598baee0f","Type":"ContainerStarted","Data":"296cae8e26724df64b36abbe9e0ac53c01850cc4bf17cc041b8f2eb2ffd01e21"} Jan 27 07:50:57 crc kubenswrapper[4799]: I0127 07:50:57.712747 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" event={"ID":"9754fb30-c58e-4ab6-9437-260598baee0f","Type":"ContainerStarted","Data":"271c36c33c1205511664b890e54d797e91238c1c33ffbd3b1e8a586cad5d22d5"} Jan 27 07:50:57 crc kubenswrapper[4799]: I0127 07:50:57.713095 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:57 crc kubenswrapper[4799]: I0127 07:50:57.720763 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" Jan 27 07:50:57 crc kubenswrapper[4799]: I0127 07:50:57.737524 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b4b98f94c-sdb54" podStartSLOduration=3.737503063 podStartE2EDuration="3.737503063s" podCreationTimestamp="2026-01-27 07:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:50:57.736926226 +0000 UTC m=+324.048030301" watchObservedRunningTime="2026-01-27 07:50:57.737503063 +0000 UTC m=+324.048607128" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.047006 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r2m62"] Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.048559 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.065364 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r2m62"] Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230162 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/912dab34-41b2-4bd5-8c7b-db948fa4f421-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230240 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/912dab34-41b2-4bd5-8c7b-db948fa4f421-registry-certificates\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230279 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/912dab34-41b2-4bd5-8c7b-db948fa4f421-trusted-ca\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230413 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230435 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vww\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-kube-api-access-n2vww\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230461 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/912dab34-41b2-4bd5-8c7b-db948fa4f421-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230506 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-registry-tls\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.230526 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-bound-sa-token\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.274277 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.331616 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-registry-tls\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.331680 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-bound-sa-token\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.331729 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/912dab34-41b2-4bd5-8c7b-db948fa4f421-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.331786 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/912dab34-41b2-4bd5-8c7b-db948fa4f421-registry-certificates\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.331820 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/912dab34-41b2-4bd5-8c7b-db948fa4f421-trusted-ca\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.331890 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vww\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-kube-api-access-n2vww\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.331940 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/912dab34-41b2-4bd5-8c7b-db948fa4f421-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.333140 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/912dab34-41b2-4bd5-8c7b-db948fa4f421-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.333421 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/912dab34-41b2-4bd5-8c7b-db948fa4f421-registry-certificates\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.333791 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/912dab34-41b2-4bd5-8c7b-db948fa4f421-trusted-ca\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.338395 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/912dab34-41b2-4bd5-8c7b-db948fa4f421-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.351091 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-registry-tls\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.354971 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-bound-sa-token\") pod \"image-registry-66df7c8f76-r2m62\" (UID: \"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.365577 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vww\" (UniqueName: \"kubernetes.io/projected/912dab34-41b2-4bd5-8c7b-db948fa4f421-kube-api-access-n2vww\") pod \"image-registry-66df7c8f76-r2m62\" (UID: 
\"912dab34-41b2-4bd5-8c7b-db948fa4f421\") " pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.367748 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.832271 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r2m62"] Jan 27 07:51:30 crc kubenswrapper[4799]: I0127 07:51:30.914208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" event={"ID":"912dab34-41b2-4bd5-8c7b-db948fa4f421","Type":"ContainerStarted","Data":"7c828ad3b4e2a343cb9c532a85e89885fe70a9400987e79068f11b56424061b0"} Jan 27 07:51:31 crc kubenswrapper[4799]: I0127 07:51:31.921254 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" event={"ID":"912dab34-41b2-4bd5-8c7b-db948fa4f421","Type":"ContainerStarted","Data":"81ff8bf5716296f0d7adc9f87d977c0d6c74f5fae9a9fcbab1979834b95cc3c0"} Jan 27 07:51:31 crc kubenswrapper[4799]: I0127 07:51:31.921848 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:31 crc kubenswrapper[4799]: I0127 07:51:31.952926 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" podStartSLOduration=1.952904426 podStartE2EDuration="1.952904426s" podCreationTimestamp="2026-01-27 07:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:51:31.94781231 +0000 UTC m=+358.258916385" watchObservedRunningTime="2026-01-27 07:51:31.952904426 +0000 UTC m=+358.264008501" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.213191 
4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw"] Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.214492 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" podUID="ab7f016a-baa3-4384-bd0f-1fe375242c26" containerName="route-controller-manager" containerID="cri-o://6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58" gracePeriod=30 Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.616433 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.713426 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7f016a-baa3-4384-bd0f-1fe375242c26-serving-cert\") pod \"ab7f016a-baa3-4384-bd0f-1fe375242c26\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.713483 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-config\") pod \"ab7f016a-baa3-4384-bd0f-1fe375242c26\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.713531 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcxc6\" (UniqueName: \"kubernetes.io/projected/ab7f016a-baa3-4384-bd0f-1fe375242c26-kube-api-access-vcxc6\") pod \"ab7f016a-baa3-4384-bd0f-1fe375242c26\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.713560 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-client-ca\") pod \"ab7f016a-baa3-4384-bd0f-1fe375242c26\" (UID: \"ab7f016a-baa3-4384-bd0f-1fe375242c26\") " Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.714157 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-client-ca" (OuterVolumeSpecName: "client-ca") pod "ab7f016a-baa3-4384-bd0f-1fe375242c26" (UID: "ab7f016a-baa3-4384-bd0f-1fe375242c26"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.714172 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-config" (OuterVolumeSpecName: "config") pod "ab7f016a-baa3-4384-bd0f-1fe375242c26" (UID: "ab7f016a-baa3-4384-bd0f-1fe375242c26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.714469 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.714497 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab7f016a-baa3-4384-bd0f-1fe375242c26-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.719480 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7f016a-baa3-4384-bd0f-1fe375242c26-kube-api-access-vcxc6" (OuterVolumeSpecName: "kube-api-access-vcxc6") pod "ab7f016a-baa3-4384-bd0f-1fe375242c26" (UID: "ab7f016a-baa3-4384-bd0f-1fe375242c26"). InnerVolumeSpecName "kube-api-access-vcxc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.720592 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7f016a-baa3-4384-bd0f-1fe375242c26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ab7f016a-baa3-4384-bd0f-1fe375242c26" (UID: "ab7f016a-baa3-4384-bd0f-1fe375242c26"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.815620 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcxc6\" (UniqueName: \"kubernetes.io/projected/ab7f016a-baa3-4384-bd0f-1fe375242c26-kube-api-access-vcxc6\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.815650 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab7f016a-baa3-4384-bd0f-1fe375242c26-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.943374 4799 generic.go:334] "Generic (PLEG): container finished" podID="ab7f016a-baa3-4384-bd0f-1fe375242c26" containerID="6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58" exitCode=0 Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.943452 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" event={"ID":"ab7f016a-baa3-4384-bd0f-1fe375242c26","Type":"ContainerDied","Data":"6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58"} Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.943458 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.943508 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw" event={"ID":"ab7f016a-baa3-4384-bd0f-1fe375242c26","Type":"ContainerDied","Data":"82c09cf5877680300b6b4a60a3668cc043f9ae07bd6878c6633c914f578aaebb"} Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.943543 4799 scope.go:117] "RemoveContainer" containerID="6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.964095 4799 scope.go:117] "RemoveContainer" containerID="6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58" Jan 27 07:51:34 crc kubenswrapper[4799]: E0127 07:51:34.967364 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58\": container with ID starting with 6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58 not found: ID does not exist" containerID="6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.967420 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58"} err="failed to get container status \"6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58\": rpc error: code = NotFound desc = could not find container \"6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58\": container with ID starting with 6a3623da7e73dbacc8f1cb57823c23731db050d6b071c68e54bcc902392f1d58 not found: ID does not exist" Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.972572 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw"] Jan 27 07:51:34 crc kubenswrapper[4799]: I0127 07:51:34.975519 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8596db5b-8wxsw"] Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.732657 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7"] Jan 27 07:51:35 crc kubenswrapper[4799]: E0127 07:51:35.733162 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f016a-baa3-4384-bd0f-1fe375242c26" containerName="route-controller-manager" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.733175 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f016a-baa3-4384-bd0f-1fe375242c26" containerName="route-controller-manager" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.733274 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7f016a-baa3-4384-bd0f-1fe375242c26" containerName="route-controller-manager" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.733702 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.737145 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.737494 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.737687 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.737917 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.738110 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.738341 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.746045 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7"] Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.829666 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0337083-52a2-447a-a3c9-abea3a9a8c13-client-ca\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.830029 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0337083-52a2-447a-a3c9-abea3a9a8c13-config\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.830168 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0337083-52a2-447a-a3c9-abea3a9a8c13-serving-cert\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.830345 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krld\" (UniqueName: \"kubernetes.io/projected/f0337083-52a2-447a-a3c9-abea3a9a8c13-kube-api-access-9krld\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.932204 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0337083-52a2-447a-a3c9-abea3a9a8c13-client-ca\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.932652 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0337083-52a2-447a-a3c9-abea3a9a8c13-config\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: 
\"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.932801 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0337083-52a2-447a-a3c9-abea3a9a8c13-serving-cert\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.932943 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krld\" (UniqueName: \"kubernetes.io/projected/f0337083-52a2-447a-a3c9-abea3a9a8c13-kube-api-access-9krld\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.933641 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0337083-52a2-447a-a3c9-abea3a9a8c13-client-ca\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.935080 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0337083-52a2-447a-a3c9-abea3a9a8c13-config\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.950427 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/f0337083-52a2-447a-a3c9-abea3a9a8c13-serving-cert\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:35 crc kubenswrapper[4799]: I0127 07:51:35.959418 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krld\" (UniqueName: \"kubernetes.io/projected/f0337083-52a2-447a-a3c9-abea3a9a8c13-kube-api-access-9krld\") pod \"route-controller-manager-cd56b64df-svdg7\" (UID: \"f0337083-52a2-447a-a3c9-abea3a9a8c13\") " pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:36 crc kubenswrapper[4799]: I0127 07:51:36.069669 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:36 crc kubenswrapper[4799]: I0127 07:51:36.460458 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7f016a-baa3-4384-bd0f-1fe375242c26" path="/var/lib/kubelet/pods/ab7f016a-baa3-4384-bd0f-1fe375242c26/volumes" Jan 27 07:51:36 crc kubenswrapper[4799]: I0127 07:51:36.536344 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7"] Jan 27 07:51:36 crc kubenswrapper[4799]: W0127 07:51:36.544577 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0337083_52a2_447a_a3c9_abea3a9a8c13.slice/crio-ce26854968fb92a14e17fd78c9b2090f3e42f7978d1303818448f2654c461d78 WatchSource:0}: Error finding container ce26854968fb92a14e17fd78c9b2090f3e42f7978d1303818448f2654c461d78: Status 404 returned error can't find the container with id ce26854968fb92a14e17fd78c9b2090f3e42f7978d1303818448f2654c461d78 Jan 27 07:51:36 crc kubenswrapper[4799]: I0127 07:51:36.960411 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" event={"ID":"f0337083-52a2-447a-a3c9-abea3a9a8c13","Type":"ContainerStarted","Data":"b3807da948fb8e85faa145cceea850a3f34bad5b547297a6e969af8912dc1c34"} Jan 27 07:51:36 crc kubenswrapper[4799]: I0127 07:51:36.960833 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" event={"ID":"f0337083-52a2-447a-a3c9-abea3a9a8c13","Type":"ContainerStarted","Data":"ce26854968fb92a14e17fd78c9b2090f3e42f7978d1303818448f2654c461d78"} Jan 27 07:51:36 crc kubenswrapper[4799]: I0127 07:51:36.960852 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.094185 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" podStartSLOduration=3.094166695 podStartE2EDuration="3.094166695s" podCreationTimestamp="2026-01-27 07:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:51:36.986257554 +0000 UTC m=+363.297361629" watchObservedRunningTime="2026-01-27 07:51:37.094166695 +0000 UTC m=+363.405270760" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.101258 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kr6pr"] Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.101577 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kr6pr" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="registry-server" containerID="cri-o://74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882" gracePeriod=30 
Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.114446 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6ktz"] Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.114876 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g6ktz" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="registry-server" containerID="cri-o://942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb" gracePeriod=30 Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.119221 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2m2xz"] Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.120734 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator" containerID="cri-o://591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf" gracePeriod=30 Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.129481 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tm4nj"] Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.129842 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tm4nj" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="registry-server" containerID="cri-o://4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96" gracePeriod=30 Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.150466 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxtw5"] Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.150806 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-zxtw5" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="registry-server" containerID="cri-o://f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4" gracePeriod=30 Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.158031 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b8j25"] Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.160807 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.165738 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b8j25"] Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.196398 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cd56b64df-svdg7" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.248816 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64lxh\" (UniqueName: \"kubernetes.io/projected/69748a95-cef3-4ad3-99aa-7e59a1f7683c-kube-api-access-64lxh\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.248955 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69748a95-cef3-4ad3-99aa-7e59a1f7683c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.249027 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69748a95-cef3-4ad3-99aa-7e59a1f7683c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.350271 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69748a95-cef3-4ad3-99aa-7e59a1f7683c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.350415 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69748a95-cef3-4ad3-99aa-7e59a1f7683c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.350471 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64lxh\" (UniqueName: \"kubernetes.io/projected/69748a95-cef3-4ad3-99aa-7e59a1f7683c-kube-api-access-64lxh\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.352441 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69748a95-cef3-4ad3-99aa-7e59a1f7683c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b8j25\" 
(UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.358280 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69748a95-cef3-4ad3-99aa-7e59a1f7683c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.366655 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64lxh\" (UniqueName: \"kubernetes.io/projected/69748a95-cef3-4ad3-99aa-7e59a1f7683c-kube-api-access-64lxh\") pod \"marketplace-operator-79b997595-b8j25\" (UID: \"69748a95-cef3-4ad3-99aa-7e59a1f7683c\") " pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.544680 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.548917 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.653163 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.654211 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-catalog-content\") pod \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.654241 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfd8z\" (UniqueName: \"kubernetes.io/projected/c808aeb6-0065-4efc-9d98-9ee6c97e3250-kube-api-access-pfd8z\") pod \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.660672 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-utilities\") pod \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\" (UID: \"c808aeb6-0065-4efc-9d98-9ee6c97e3250\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.662380 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-utilities" (OuterVolumeSpecName: "utilities") pod "c808aeb6-0065-4efc-9d98-9ee6c97e3250" (UID: "c808aeb6-0065-4efc-9d98-9ee6c97e3250"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.663397 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c808aeb6-0065-4efc-9d98-9ee6c97e3250-kube-api-access-pfd8z" (OuterVolumeSpecName: "kube-api-access-pfd8z") pod "c808aeb6-0065-4efc-9d98-9ee6c97e3250" (UID: "c808aeb6-0065-4efc-9d98-9ee6c97e3250"). InnerVolumeSpecName "kube-api-access-pfd8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.711760 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.716289 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c808aeb6-0065-4efc-9d98-9ee6c97e3250" (UID: "c808aeb6-0065-4efc-9d98-9ee6c97e3250"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.722158 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.732708 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.762353 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-catalog-content\") pod \"472d8035-24d2-4d6c-bb9d-4f932d4be020\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.762469 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tbk4\" (UniqueName: \"kubernetes.io/projected/472d8035-24d2-4d6c-bb9d-4f932d4be020-kube-api-access-6tbk4\") pod \"472d8035-24d2-4d6c-bb9d-4f932d4be020\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.762521 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-utilities\") pod \"472d8035-24d2-4d6c-bb9d-4f932d4be020\" (UID: \"472d8035-24d2-4d6c-bb9d-4f932d4be020\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.762793 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.762805 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c808aeb6-0065-4efc-9d98-9ee6c97e3250-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.762816 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfd8z\" (UniqueName: \"kubernetes.io/projected/c808aeb6-0065-4efc-9d98-9ee6c97e3250-kube-api-access-pfd8z\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: 
I0127 07:51:37.763367 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-utilities" (OuterVolumeSpecName: "utilities") pod "472d8035-24d2-4d6c-bb9d-4f932d4be020" (UID: "472d8035-24d2-4d6c-bb9d-4f932d4be020"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.773519 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472d8035-24d2-4d6c-bb9d-4f932d4be020-kube-api-access-6tbk4" (OuterVolumeSpecName: "kube-api-access-6tbk4") pod "472d8035-24d2-4d6c-bb9d-4f932d4be020" (UID: "472d8035-24d2-4d6c-bb9d-4f932d4be020"). InnerVolumeSpecName "kube-api-access-6tbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.863863 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-trusted-ca\") pod \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.863947 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-catalog-content\") pod \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.863998 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-utilities\") pod \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.864026 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzhbl\" (UniqueName: \"kubernetes.io/projected/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-kube-api-access-rzhbl\") pod \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.864113 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g62kr\" (UniqueName: \"kubernetes.io/projected/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-kube-api-access-g62kr\") pod \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.864156 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-operator-metrics\") pod \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\" (UID: \"2b678fa7-59f7-4a2c-8cae-3f71a17f8734\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.864182 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-utilities\") pod \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.864218 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-catalog-content\") pod \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\" (UID: \"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.864310 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plhtn\" (UniqueName: 
\"kubernetes.io/projected/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-kube-api-access-plhtn\") pod \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\" (UID: \"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd\") " Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.864683 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2b678fa7-59f7-4a2c-8cae-3f71a17f8734" (UID: "2b678fa7-59f7-4a2c-8cae-3f71a17f8734"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.865158 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-utilities" (OuterVolumeSpecName: "utilities") pod "1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" (UID: "1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.865185 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tbk4\" (UniqueName: \"kubernetes.io/projected/472d8035-24d2-4d6c-bb9d-4f932d4be020-kube-api-access-6tbk4\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.865201 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.865211 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.865540 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-utilities" (OuterVolumeSpecName: "utilities") pod "0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" (UID: "0f28fa44-7662-40a4-a2c2-81bb5a9c4ace"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.867995 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-kube-api-access-g62kr" (OuterVolumeSpecName: "kube-api-access-g62kr") pod "0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" (UID: "0f28fa44-7662-40a4-a2c2-81bb5a9c4ace"). InnerVolumeSpecName "kube-api-access-g62kr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.868428 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2b678fa7-59f7-4a2c-8cae-3f71a17f8734" (UID: "2b678fa7-59f7-4a2c-8cae-3f71a17f8734"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.868547 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-kube-api-access-rzhbl" (OuterVolumeSpecName: "kube-api-access-rzhbl") pod "2b678fa7-59f7-4a2c-8cae-3f71a17f8734" (UID: "2b678fa7-59f7-4a2c-8cae-3f71a17f8734"). InnerVolumeSpecName "kube-api-access-rzhbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.868581 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-kube-api-access-plhtn" (OuterVolumeSpecName: "kube-api-access-plhtn") pod "1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" (UID: "1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd"). InnerVolumeSpecName "kube-api-access-plhtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.889858 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" (UID: "0f28fa44-7662-40a4-a2c2-81bb5a9c4ace"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.890780 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "472d8035-24d2-4d6c-bb9d-4f932d4be020" (UID: "472d8035-24d2-4d6c-bb9d-4f932d4be020"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.912895 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" (UID: "1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966762 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plhtn\" (UniqueName: \"kubernetes.io/projected/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-kube-api-access-plhtn\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966801 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966811 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966819 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzhbl\" (UniqueName: 
\"kubernetes.io/projected/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-kube-api-access-rzhbl\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966831 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472d8035-24d2-4d6c-bb9d-4f932d4be020-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966839 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g62kr\" (UniqueName: \"kubernetes.io/projected/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-kube-api-access-g62kr\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966848 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b678fa7-59f7-4a2c-8cae-3f71a17f8734-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966857 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.966864 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.972715 4799 generic.go:334] "Generic (PLEG): container finished" podID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerID="4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96" exitCode=0 Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.972796 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tm4nj" 
event={"ID":"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace","Type":"ContainerDied","Data":"4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96"} Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.972833 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tm4nj" event={"ID":"0f28fa44-7662-40a4-a2c2-81bb5a9c4ace","Type":"ContainerDied","Data":"bdee9a0214aa7ae0857b3e3694a11f04db99a3f24c58e9a7f6cb8b71abae107e"} Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.972856 4799 scope.go:117] "RemoveContainer" containerID="4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.972798 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tm4nj" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.976244 4799 generic.go:334] "Generic (PLEG): container finished" podID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerID="74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882" exitCode=0 Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.976425 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr6pr" event={"ID":"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd","Type":"ContainerDied","Data":"74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882"} Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.976463 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr6pr" event={"ID":"1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd","Type":"ContainerDied","Data":"3627d321c0c285955306d77dca7cb838c3313036b6f8f85176e579ea727ab0a5"} Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.976519 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kr6pr" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.991248 4799 generic.go:334] "Generic (PLEG): container finished" podID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerID="f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4" exitCode=0 Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.991412 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtw5" event={"ID":"472d8035-24d2-4d6c-bb9d-4f932d4be020","Type":"ContainerDied","Data":"f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4"} Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.991971 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtw5" event={"ID":"472d8035-24d2-4d6c-bb9d-4f932d4be020","Type":"ContainerDied","Data":"3ecbf07cf07a66853956b24adbfb44442c2b2cbdf218c845b7e4451289f8c364"} Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.992522 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtw5" Jan 27 07:51:37 crc kubenswrapper[4799]: I0127 07:51:37.996831 4799 scope.go:117] "RemoveContainer" containerID="c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.022655 4799 scope.go:117] "RemoveContainer" containerID="5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.026629 4799 generic.go:334] "Generic (PLEG): container finished" podID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerID="942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb" exitCode=0 Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.026786 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6ktz" event={"ID":"c808aeb6-0065-4efc-9d98-9ee6c97e3250","Type":"ContainerDied","Data":"942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb"} Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.026848 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6ktz" event={"ID":"c808aeb6-0065-4efc-9d98-9ee6c97e3250","Type":"ContainerDied","Data":"c652a30720441b8d5fcd7ee284beab017f9e4e51e040b619d56450afb171b77d"} Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.029293 4799 generic.go:334] "Generic (PLEG): container finished" podID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerID="591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf" exitCode=0 Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.029400 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" event={"ID":"2b678fa7-59f7-4a2c-8cae-3f71a17f8734","Type":"ContainerDied","Data":"591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf"} Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.029483 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" event={"ID":"2b678fa7-59f7-4a2c-8cae-3f71a17f8734","Type":"ContainerDied","Data":"1208f4f477bb3a191a3a5ec57272b327bf0f95353f77bab7f6b7aaf0cb8ca5e7"} Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.030511 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b8j25"] Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.032378 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2m2xz" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.032495 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6ktz" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.054194 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kr6pr"] Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.061356 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kr6pr"] Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.070661 4799 scope.go:117] "RemoveContainer" containerID="4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96" Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.071748 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96\": container with ID starting with 4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96 not found: ID does not exist" containerID="4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.071900 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96"} err="failed to get container status \"4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96\": rpc error: code = NotFound desc = could not find container \"4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96\": container with ID starting with 4b3d9d2d1ec064c46a718f7168de3be58cf7e17fa7710493dfb2041b33745b96 not found: ID does not exist" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.071991 4799 scope.go:117] "RemoveContainer" containerID="c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5" Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.072439 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5\": container with ID starting with c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5 not found: ID does not exist" containerID="c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.072495 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5"} err="failed to get container status \"c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5\": rpc error: code = NotFound desc = could not find container \"c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5\": container with ID starting with c340097a7a7e6b9698ba9b2d82eb74ac091a5bd4b5935c30997ebf920a04f7d5 not found: ID does not exist" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.072556 4799 scope.go:117] "RemoveContainer" containerID="5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983" Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.072877 4799 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983\": container with ID starting with 5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983 not found: ID does not exist" containerID="5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.072892 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983"} err="failed to get container status \"5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983\": rpc error: code = NotFound desc = could not find container \"5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983\": container with ID starting with 5484c333fc6d8a39b340305f837170ec46cc3b9486a08d7893d751fd1ff91983 not found: ID does not exist" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.072904 4799 scope.go:117] "RemoveContainer" containerID="74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.073028 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tm4nj"] Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.076805 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tm4nj"] Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.091821 4799 scope.go:117] "RemoveContainer" containerID="c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.123149 4799 scope.go:117] "RemoveContainer" containerID="4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01" Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.123791 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-2m2xz"]
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.136579 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2m2xz"]
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.141189 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6ktz"]
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.146378 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g6ktz"]
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.149396 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxtw5"]
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.152977 4799 scope.go:117] "RemoveContainer" containerID="74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.153541 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882\": container with ID starting with 74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882 not found: ID does not exist" containerID="74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.153582 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882"} err="failed to get container status \"74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882\": rpc error: code = NotFound desc = could not find container \"74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882\": container with ID starting with 74bfd43f4308e44eb46e49413bc3de7cb7f0eae1e18342be5f0078584a85f882 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.153612 4799 scope.go:117] "RemoveContainer" containerID="c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.154064 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2\": container with ID starting with c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2 not found: ID does not exist" containerID="c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.154106 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2"} err="failed to get container status \"c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2\": rpc error: code = NotFound desc = could not find container \"c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2\": container with ID starting with c5ef2aa6aadce8a0c3f3428d9cef3c481e6d8bf77f24a16e9a9d314d3114b8b2 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.154135 4799 scope.go:117] "RemoveContainer" containerID="4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.154820 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01\": container with ID starting with 4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01 not found: ID does not exist" containerID="4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.154852 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01"} err="failed to get container status \"4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01\": rpc error: code = NotFound desc = could not find container \"4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01\": container with ID starting with 4685affb3f16eb5020fb30f6bb676ff8756e7abc5145235df53586b833c9df01 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.154873 4799 scope.go:117] "RemoveContainer" containerID="f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.155918 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zxtw5"]
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.175335 4799 scope.go:117] "RemoveContainer" containerID="8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.192280 4799 scope.go:117] "RemoveContainer" containerID="0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.213544 4799 scope.go:117] "RemoveContainer" containerID="f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.214119 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4\": container with ID starting with f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4 not found: ID does not exist" containerID="f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.214174 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4"} err="failed to get container status \"f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4\": rpc error: code = NotFound desc = could not find container \"f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4\": container with ID starting with f164310698041329ec9082c07b5ba51f4981d76573bf60662cc5705346c757e4 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.214200 4799 scope.go:117] "RemoveContainer" containerID="8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.214515 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0\": container with ID starting with 8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0 not found: ID does not exist" containerID="8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.214536 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0"} err="failed to get container status \"8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0\": rpc error: code = NotFound desc = could not find container \"8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0\": container with ID starting with 8bb1f0254ad0ee194dc2f2f2f7c525441daaf634b0087a21f9463c0e6b85c3d0 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.214551 4799 scope.go:117] "RemoveContainer" containerID="0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.214898 4799 log.go:32] "ContainerStatus from runtime service
failed" err="rpc error: code = NotFound desc = could not find container \"0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230\": container with ID starting with 0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230 not found: ID does not exist" containerID="0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.214949 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230"} err="failed to get container status \"0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230\": rpc error: code = NotFound desc = could not find container \"0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230\": container with ID starting with 0fdf6de4de5f8a31a322db013c3818736efa536dc72f4cd2e84ae77a1bc13230 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.214970 4799 scope.go:117] "RemoveContainer" containerID="942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.237460 4799 scope.go:117] "RemoveContainer" containerID="99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.253096 4799 scope.go:117] "RemoveContainer" containerID="7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.266902 4799 scope.go:117] "RemoveContainer" containerID="942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.267408 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb\": container with ID starting with 942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb not found: ID does not exist" containerID="942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.267437 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb"} err="failed to get container status \"942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb\": rpc error: code = NotFound desc = could not find container \"942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb\": container with ID starting with 942137fa1748f04df2eb22d549dfa15ad59a07a1cfc431100256f97853807bdb not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.267481 4799 scope.go:117] "RemoveContainer" containerID="99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.267806 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39\": container with ID starting with 99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39 not found: ID does not exist" containerID="99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.267825 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39"} err="failed to get container status \"99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39\": rpc error: code = NotFound desc = could not find container \"99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39\": container with ID starting with 99f4f1bd15a237315b123a470efbac1da0bb78f140fe0205ec6520ee60ed5e39 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.267855 4799 scope.go:117] "RemoveContainer" containerID="7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.268434 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5\": container with ID starting with 7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5 not found: ID does not exist" containerID="7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.268453 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5"} err="failed to get container status \"7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5\": rpc error: code = NotFound desc = could not find container \"7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5\": container with ID starting with 7eb4f637c7941d0690710ad77ea9bdd198746dd85ed973b298a9460cbdbde8d5 not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.268484 4799 scope.go:117] "RemoveContainer" containerID="591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.282485 4799 scope.go:117] "RemoveContainer" containerID="bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.298130 4799 scope.go:117] "RemoveContainer" containerID="591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.298510 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf\": container with ID starting with 591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf not found: ID does not exist" containerID="591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.298548 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf"} err="failed to get container status \"591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf\": rpc error: code = NotFound desc = could not find container \"591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf\": container with ID starting with 591ccff02dde4bc36c2401c3d9a9c496783aeb5836a1e0eba94e185d46821faf not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.298587 4799 scope.go:117] "RemoveContainer" containerID="bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a"
Jan 27 07:51:38 crc kubenswrapper[4799]: E0127 07:51:38.298890 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a\": container with ID starting with bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a not found: ID does not exist" containerID="bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.298922 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a"} err="failed to get container status \"bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a\": rpc error: code = NotFound desc = could not find container \"bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a\": container with ID
starting with bcd2d2ca977f8fd428a7df0dde6d40340d4ccfa3d48a6b1994c7268011ed7e3a not found: ID does not exist"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.463193 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" path="/var/lib/kubelet/pods/0f28fa44-7662-40a4-a2c2-81bb5a9c4ace/volumes"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.464516 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" path="/var/lib/kubelet/pods/1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd/volumes"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.465793 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" path="/var/lib/kubelet/pods/2b678fa7-59f7-4a2c-8cae-3f71a17f8734/volumes"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.467625 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" path="/var/lib/kubelet/pods/472d8035-24d2-4d6c-bb9d-4f932d4be020/volumes"
Jan 27 07:51:38 crc kubenswrapper[4799]: I0127 07:51:38.468880 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" path="/var/lib/kubelet/pods/c808aeb6-0065-4efc-9d98-9ee6c97e3250/volumes"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.044186 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" event={"ID":"69748a95-cef3-4ad3-99aa-7e59a1f7683c","Type":"ContainerStarted","Data":"c81f8c6b0d031ccb75904a0dea53a7703b3a8719907db50930f393e63d1827cb"}
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.044241 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" event={"ID":"69748a95-cef3-4ad3-99aa-7e59a1f7683c","Type":"ContainerStarted","Data":"7495981c932b2f44ef47b997c36ccfd44724c50b9afb55a609e9e30e7acd2b1f"}
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.044685 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b8j25"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.048739 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b8j25"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.066842 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b8j25" podStartSLOduration=2.066823955 podStartE2EDuration="2.066823955s" podCreationTimestamp="2026-01-27 07:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:51:39.063849645 +0000 UTC m=+365.374953730" watchObservedRunningTime="2026-01-27 07:51:39.066823955 +0000 UTC m=+365.377928020"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.726916 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpdx6"]
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.727904 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.728063 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.728172 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.728265 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.728484 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.728596 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.728700 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.728793 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.728894 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.729018 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.729113 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.729210 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.729326 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.729418 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.729527 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.729622 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.729726 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.729825 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.729950 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730056 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.730156 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730245 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.730402 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730499 4799
state_mem.go:107] "Deleted CPUSet assignment" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="extract-utilities"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.730520 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730537 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="extract-content"
Jan 27 07:51:39 crc kubenswrapper[4799]: E0127 07:51:39.730550 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730559 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730754 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c808aeb6-0065-4efc-9d98-9ee6c97e3250" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730773 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0bd18e-8063-4c7c-8a1b-3f8dfca1cabd" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730794 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730812 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="472d8035-24d2-4d6c-bb9d-4f932d4be020" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730827 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f28fa44-7662-40a4-a2c2-81bb5a9c4ace" containerName="registry-server"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.730842 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b678fa7-59f7-4a2c-8cae-3f71a17f8734" containerName="marketplace-operator"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.732917 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.738345 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.742569 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpdx6"]
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.906532 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqfj\" (UniqueName: \"kubernetes.io/projected/2d44c7a9-27c0-4266-a833-0932010c632a-kube-api-access-7tqfj\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.906612 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d44c7a9-27c0-4266-a833-0932010c632a-catalog-content\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.906670 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d44c7a9-27c0-4266-a833-0932010c632a-utilities\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.925886 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6tjvj"]
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.927062 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.929474 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 07:51:39 crc kubenswrapper[4799]: I0127 07:51:39.937270 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tjvj"]
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.008353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqfj\" (UniqueName: \"kubernetes.io/projected/2d44c7a9-27c0-4266-a833-0932010c632a-kube-api-access-7tqfj\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.008419 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d44c7a9-27c0-4266-a833-0932010c632a-catalog-content\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.008450 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d44c7a9-27c0-4266-a833-0932010c632a-utilities\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.008991 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d44c7a9-27c0-4266-a833-0932010c632a-utilities\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.010359 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d44c7a9-27c0-4266-a833-0932010c632a-catalog-content\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.027366 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqfj\" (UniqueName: \"kubernetes.io/projected/2d44c7a9-27c0-4266-a833-0932010c632a-kube-api-access-7tqfj\") pod \"certified-operators-mpdx6\" (UID: \"2d44c7a9-27c0-4266-a833-0932010c632a\") " pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.054067 4799 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-mpdx6"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.110334 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-utilities\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.110391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7jn\" (UniqueName: \"kubernetes.io/projected/6646c613-e0d7-42d3-b170-c2768b718f02-kube-api-access-rl7jn\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.110480 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-catalog-content\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.212338 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7jn\" (UniqueName: \"kubernetes.io/projected/6646c613-e0d7-42d3-b170-c2768b718f02-kube-api-access-rl7jn\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.212963 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-catalog-content\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.213093 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-utilities\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.213823 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-utilities\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.214709 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-catalog-content\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.232502 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7jn\" (UniqueName: \"kubernetes.io/projected/6646c613-e0d7-42d3-b170-c2768b718f02-kube-api-access-rl7jn\") pod \"community-operators-6tjvj\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.245980 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjvj"
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.461703 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpdx6"]
Jan 27 07:51:40 crc kubenswrapper[4799]: W0127 07:51:40.466287 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d44c7a9_27c0_4266_a833_0932010c632a.slice/crio-6c5584920ff5c2dedcbaecd2978b35de1ab987690ebd37e6108241d2c33225b6 WatchSource:0}: Error finding container 6c5584920ff5c2dedcbaecd2978b35de1ab987690ebd37e6108241d2c33225b6: Status 404 returned error can't find the container with id 6c5584920ff5c2dedcbaecd2978b35de1ab987690ebd37e6108241d2c33225b6
Jan 27 07:51:40 crc kubenswrapper[4799]: I0127 07:51:40.657146 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tjvj"]
Jan 27 07:51:40 crc kubenswrapper[4799]: W0127 07:51:40.705466 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6646c613_e0d7_42d3_b170_c2768b718f02.slice/crio-b66036f2b95b7bdb26e89f68c695e6ba7a48bdca3b53ffddea8ec9b5e8990c83 WatchSource:0}: Error finding container b66036f2b95b7bdb26e89f68c695e6ba7a48bdca3b53ffddea8ec9b5e8990c83: Status 404 returned error can't find the container with id b66036f2b95b7bdb26e89f68c695e6ba7a48bdca3b53ffddea8ec9b5e8990c83
Jan 27 07:51:41 crc kubenswrapper[4799]: I0127 07:51:41.057515 4799 generic.go:334] "Generic (PLEG): container finished" podID="2d44c7a9-27c0-4266-a833-0932010c632a" containerID="ff0fec15a31fddde9081e375fa069ec80586f6057197a91dcf8774285ccfb237" exitCode=0
Jan 27 07:51:41 crc kubenswrapper[4799]: I0127 07:51:41.057579 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpdx6" event={"ID":"2d44c7a9-27c0-4266-a833-0932010c632a","Type":"ContainerDied","Data":"ff0fec15a31fddde9081e375fa069ec80586f6057197a91dcf8774285ccfb237"}
Jan 27 07:51:41 crc kubenswrapper[4799]: I0127 07:51:41.057650 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpdx6" event={"ID":"2d44c7a9-27c0-4266-a833-0932010c632a","Type":"ContainerStarted","Data":"6c5584920ff5c2dedcbaecd2978b35de1ab987690ebd37e6108241d2c33225b6"}
Jan 27 07:51:41 crc kubenswrapper[4799]: I0127 07:51:41.068628 4799 generic.go:334] "Generic (PLEG): container finished" podID="6646c613-e0d7-42d3-b170-c2768b718f02" containerID="3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f" exitCode=0
Jan 27 07:51:41 crc kubenswrapper[4799]: I0127 07:51:41.069252 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjvj" event={"ID":"6646c613-e0d7-42d3-b170-c2768b718f02","Type":"ContainerDied","Data":"3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f"}
Jan 27 07:51:41 crc kubenswrapper[4799]: I0127 07:51:41.069285 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjvj" event={"ID":"6646c613-e0d7-42d3-b170-c2768b718f02","Type":"ContainerStarted","Data":"b66036f2b95b7bdb26e89f68c695e6ba7a48bdca3b53ffddea8ec9b5e8990c83"}
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.132177 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcrh"]
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.133822 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qcrh"
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.136736 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.146493 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcrh"]
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.235645 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c54203-e089-4140-af14-4223823e95f8-utilities\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh"
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.235820 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c54203-e089-4140-af14-4223823e95f8-catalog-content\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh"
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.235901 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtp2\" (UniqueName: \"kubernetes.io/projected/53c54203-e089-4140-af14-4223823e95f8-kube-api-access-6jtp2\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh"
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.329982 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vr7hb"]
Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.331433 4799 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.337563 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c54203-e089-4140-af14-4223823e95f8-utilities\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.337658 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c54203-e089-4140-af14-4223823e95f8-catalog-content\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.337712 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jtp2\" (UniqueName: \"kubernetes.io/projected/53c54203-e089-4140-af14-4223823e95f8-kube-api-access-6jtp2\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.338818 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c54203-e089-4140-af14-4223823e95f8-utilities\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.339087 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c54203-e089-4140-af14-4223823e95f8-catalog-content\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " 
pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.343066 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.350828 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vr7hb"] Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.367671 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jtp2\" (UniqueName: \"kubernetes.io/projected/53c54203-e089-4140-af14-4223823e95f8-kube-api-access-6jtp2\") pod \"redhat-marketplace-6qcrh\" (UID: \"53c54203-e089-4140-af14-4223823e95f8\") " pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.439497 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285a5405-cd08-4633-b71e-ba771ebba82f-utilities\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.439699 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtt6\" (UniqueName: \"kubernetes.io/projected/285a5405-cd08-4633-b71e-ba771ebba82f-kube-api-access-7rtt6\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.440088 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285a5405-cd08-4633-b71e-ba771ebba82f-catalog-content\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " 
pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.458441 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.541562 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rtt6\" (UniqueName: \"kubernetes.io/projected/285a5405-cd08-4633-b71e-ba771ebba82f-kube-api-access-7rtt6\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.541639 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285a5405-cd08-4633-b71e-ba771ebba82f-catalog-content\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.541691 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285a5405-cd08-4633-b71e-ba771ebba82f-utilities\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.542113 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285a5405-cd08-4633-b71e-ba771ebba82f-utilities\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.542319 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/285a5405-cd08-4633-b71e-ba771ebba82f-catalog-content\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.570795 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rtt6\" (UniqueName: \"kubernetes.io/projected/285a5405-cd08-4633-b71e-ba771ebba82f-kube-api-access-7rtt6\") pod \"redhat-operators-vr7hb\" (UID: \"285a5405-cd08-4633-b71e-ba771ebba82f\") " pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.689016 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qcrh"] Jan 27 07:51:42 crc kubenswrapper[4799]: I0127 07:51:42.693186 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.083733 4799 generic.go:334] "Generic (PLEG): container finished" podID="6646c613-e0d7-42d3-b170-c2768b718f02" containerID="6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112" exitCode=0 Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.083809 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjvj" event={"ID":"6646c613-e0d7-42d3-b170-c2768b718f02","Type":"ContainerDied","Data":"6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112"} Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.086345 4799 generic.go:334] "Generic (PLEG): container finished" podID="53c54203-e089-4140-af14-4223823e95f8" containerID="f16981792307e014e10faabe305a81f5acf658033165712476e0b1edff6cdd85" exitCode=0 Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.086426 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcrh" 
event={"ID":"53c54203-e089-4140-af14-4223823e95f8","Type":"ContainerDied","Data":"f16981792307e014e10faabe305a81f5acf658033165712476e0b1edff6cdd85"} Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.086462 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcrh" event={"ID":"53c54203-e089-4140-af14-4223823e95f8","Type":"ContainerStarted","Data":"25540835c7308db5dbff1d24d54f0967b73a30cf30e1de57725ce90dc86347e4"} Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.091687 4799 generic.go:334] "Generic (PLEG): container finished" podID="2d44c7a9-27c0-4266-a833-0932010c632a" containerID="049d471241a9f96f29ae1212baf8b65852ee3dc46f6bfdfbe208ad9a5877c464" exitCode=0 Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.091756 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpdx6" event={"ID":"2d44c7a9-27c0-4266-a833-0932010c632a","Type":"ContainerDied","Data":"049d471241a9f96f29ae1212baf8b65852ee3dc46f6bfdfbe208ad9a5877c464"} Jan 27 07:51:43 crc kubenswrapper[4799]: I0127 07:51:43.122044 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vr7hb"] Jan 27 07:51:43 crc kubenswrapper[4799]: W0127 07:51:43.126182 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod285a5405_cd08_4633_b71e_ba771ebba82f.slice/crio-4f78815fab754f3c4057b25aafbad218b9ad9b61c581f14b414b76931a1f05bc WatchSource:0}: Error finding container 4f78815fab754f3c4057b25aafbad218b9ad9b61c581f14b414b76931a1f05bc: Status 404 returned error can't find the container with id 4f78815fab754f3c4057b25aafbad218b9ad9b61c581f14b414b76931a1f05bc Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.098138 4799 generic.go:334] "Generic (PLEG): container finished" podID="285a5405-cd08-4633-b71e-ba771ebba82f" 
containerID="514439f76ef8b39ddc07a680cae648b788cdbc038722dd52f37f50eaf420e084" exitCode=0 Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.098242 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr7hb" event={"ID":"285a5405-cd08-4633-b71e-ba771ebba82f","Type":"ContainerDied","Data":"514439f76ef8b39ddc07a680cae648b788cdbc038722dd52f37f50eaf420e084"} Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.098823 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr7hb" event={"ID":"285a5405-cd08-4633-b71e-ba771ebba82f","Type":"ContainerStarted","Data":"4f78815fab754f3c4057b25aafbad218b9ad9b61c581f14b414b76931a1f05bc"} Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.101254 4799 generic.go:334] "Generic (PLEG): container finished" podID="53c54203-e089-4140-af14-4223823e95f8" containerID="84f9f5ab06d121693776787cf1beffcfd3f3e734289571f474d6c037bf70eaf8" exitCode=0 Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.101332 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcrh" event={"ID":"53c54203-e089-4140-af14-4223823e95f8","Type":"ContainerDied","Data":"84f9f5ab06d121693776787cf1beffcfd3f3e734289571f474d6c037bf70eaf8"} Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.104567 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpdx6" event={"ID":"2d44c7a9-27c0-4266-a833-0932010c632a","Type":"ContainerStarted","Data":"a5cfc1030a6b7ff6d5b120cac663adb871d8fcbebb8adca47c73f44378871cfb"} Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.118415 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjvj" event={"ID":"6646c613-e0d7-42d3-b170-c2768b718f02","Type":"ContainerStarted","Data":"32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053"} Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 
07:51:44.175579 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpdx6" podStartSLOduration=2.722327318 podStartE2EDuration="5.169266269s" podCreationTimestamp="2026-01-27 07:51:39 +0000 UTC" firstStartedPulling="2026-01-27 07:51:41.059112055 +0000 UTC m=+367.370216120" lastFinishedPulling="2026-01-27 07:51:43.506050976 +0000 UTC m=+369.817155071" observedRunningTime="2026-01-27 07:51:44.163591006 +0000 UTC m=+370.474695091" watchObservedRunningTime="2026-01-27 07:51:44.169266269 +0000 UTC m=+370.480370354" Jan 27 07:51:44 crc kubenswrapper[4799]: I0127 07:51:44.185581 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6tjvj" podStartSLOduration=2.666774834 podStartE2EDuration="5.185560187s" podCreationTimestamp="2026-01-27 07:51:39 +0000 UTC" firstStartedPulling="2026-01-27 07:51:41.073754191 +0000 UTC m=+367.384858256" lastFinishedPulling="2026-01-27 07:51:43.592539544 +0000 UTC m=+369.903643609" observedRunningTime="2026-01-27 07:51:44.181922935 +0000 UTC m=+370.493027010" watchObservedRunningTime="2026-01-27 07:51:44.185560187 +0000 UTC m=+370.496664252" Jan 27 07:51:45 crc kubenswrapper[4799]: I0127 07:51:45.125925 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qcrh" event={"ID":"53c54203-e089-4140-af14-4223823e95f8","Type":"ContainerStarted","Data":"2d6d1fbb7d084ae34a86f7c36717f3ef97d93e32deb49bda4a13ad3c8a947f8d"} Jan 27 07:51:45 crc kubenswrapper[4799]: I0127 07:51:45.131354 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr7hb" event={"ID":"285a5405-cd08-4633-b71e-ba771ebba82f","Type":"ContainerStarted","Data":"968d75cb86fd39c30f104a4058a94b3deb179d64ebca79e06b088f08d4f823fa"} Jan 27 07:51:45 crc kubenswrapper[4799]: I0127 07:51:45.146461 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-6qcrh" podStartSLOduration=1.6714449999999998 podStartE2EDuration="3.146443966s" podCreationTimestamp="2026-01-27 07:51:42 +0000 UTC" firstStartedPulling="2026-01-27 07:51:43.088186065 +0000 UTC m=+369.399290150" lastFinishedPulling="2026-01-27 07:51:44.563185051 +0000 UTC m=+370.874289116" observedRunningTime="2026-01-27 07:51:45.144442505 +0000 UTC m=+371.455546590" watchObservedRunningTime="2026-01-27 07:51:45.146443966 +0000 UTC m=+371.457548041" Jan 27 07:51:46 crc kubenswrapper[4799]: I0127 07:51:46.141650 4799 generic.go:334] "Generic (PLEG): container finished" podID="285a5405-cd08-4633-b71e-ba771ebba82f" containerID="968d75cb86fd39c30f104a4058a94b3deb179d64ebca79e06b088f08d4f823fa" exitCode=0 Jan 27 07:51:46 crc kubenswrapper[4799]: I0127 07:51:46.144019 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr7hb" event={"ID":"285a5405-cd08-4633-b71e-ba771ebba82f","Type":"ContainerDied","Data":"968d75cb86fd39c30f104a4058a94b3deb179d64ebca79e06b088f08d4f823fa"} Jan 27 07:51:47 crc kubenswrapper[4799]: I0127 07:51:47.150032 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr7hb" event={"ID":"285a5405-cd08-4633-b71e-ba771ebba82f","Type":"ContainerStarted","Data":"6fff54b98a5867b3a5699f8db5583bb38c279d9ff3f4db2b544bc6c8dbe32818"} Jan 27 07:51:47 crc kubenswrapper[4799]: I0127 07:51:47.173925 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vr7hb" podStartSLOduration=2.741646892 podStartE2EDuration="5.173900007s" podCreationTimestamp="2026-01-27 07:51:42 +0000 UTC" firstStartedPulling="2026-01-27 07:51:44.10106599 +0000 UTC m=+370.412170055" lastFinishedPulling="2026-01-27 07:51:46.533319105 +0000 UTC m=+372.844423170" observedRunningTime="2026-01-27 07:51:47.168966317 +0000 UTC m=+373.480070402" watchObservedRunningTime="2026-01-27 07:51:47.173900007 
+0000 UTC m=+373.485004082" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.055340 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mpdx6" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.055830 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mpdx6" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.119806 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mpdx6" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.216905 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpdx6" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.247177 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6tjvj" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.248249 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6tjvj" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.289354 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6tjvj" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.374439 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-r2m62" Jan 27 07:51:50 crc kubenswrapper[4799]: I0127 07:51:50.432583 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ww5r"] Jan 27 07:51:51 crc kubenswrapper[4799]: I0127 07:51:51.223607 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6tjvj" Jan 27 07:51:52 crc kubenswrapper[4799]: I0127 07:51:52.459271 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:52 crc kubenswrapper[4799]: I0127 07:51:52.459655 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:52 crc kubenswrapper[4799]: I0127 07:51:52.512428 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:52 crc kubenswrapper[4799]: I0127 07:51:52.693865 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:52 crc kubenswrapper[4799]: I0127 07:51:52.694232 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:51:53 crc kubenswrapper[4799]: I0127 07:51:53.220787 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qcrh" Jan 27 07:51:53 crc kubenswrapper[4799]: I0127 07:51:53.731387 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:51:53 crc kubenswrapper[4799]: I0127 07:51:53.731459 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:51:53 crc kubenswrapper[4799]: I0127 07:51:53.732349 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vr7hb" 
podUID="285a5405-cd08-4633-b71e-ba771ebba82f" containerName="registry-server" probeResult="failure" output=< Jan 27 07:51:53 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 07:51:53 crc kubenswrapper[4799]: > Jan 27 07:52:02 crc kubenswrapper[4799]: I0127 07:52:02.727513 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:52:02 crc kubenswrapper[4799]: I0127 07:52:02.767145 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vr7hb" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.477680 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" podUID="13ca58bb-9a4a-420d-b692-9ceda01d8b0c" containerName="registry" containerID="cri-o://e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47" gracePeriod=30 Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.844066 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.863455 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.863522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-trusted-ca\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.863559 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-installation-pull-secrets\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.863594 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w68rn\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-kube-api-access-w68rn\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.863633 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-bound-sa-token\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.864913 4799 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.865773 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-ca-trust-extracted\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.865821 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-tls\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.865854 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-certificates\") pod \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\" (UID: \"13ca58bb-9a4a-420d-b692-9ceda01d8b0c\") " Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.866353 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.867032 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: 
"13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.873397 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.874035 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.874446 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.874736 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.874912 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-kube-api-access-w68rn" (OuterVolumeSpecName: "kube-api-access-w68rn") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "kube-api-access-w68rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.887047 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "13ca58bb-9a4a-420d-b692-9ceda01d8b0c" (UID: "13ca58bb-9a4a-420d-b692-9ceda01d8b0c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.967573 4799 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.967606 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w68rn\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-kube-api-access-w68rn\") on node \"crc\" DevicePath \"\"" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.967618 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.967628 4799 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.967637 4799 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 07:52:15 crc kubenswrapper[4799]: I0127 07:52:15.967645 4799 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/13ca58bb-9a4a-420d-b692-9ceda01d8b0c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.333864 4799 generic.go:334] "Generic (PLEG): container finished" podID="13ca58bb-9a4a-420d-b692-9ceda01d8b0c" containerID="e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47" exitCode=0 Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.333908 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" event={"ID":"13ca58bb-9a4a-420d-b692-9ceda01d8b0c","Type":"ContainerDied","Data":"e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47"} Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.333938 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" event={"ID":"13ca58bb-9a4a-420d-b692-9ceda01d8b0c","Type":"ContainerDied","Data":"075c302c9836f1cd2f0a0ee00a87d27f09e37426a11318e8226bbde65915c475"} Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.333956 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ww5r" Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.333961 4799 scope.go:117] "RemoveContainer" containerID="e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47" Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.354507 4799 scope.go:117] "RemoveContainer" containerID="e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47" Jan 27 07:52:16 crc kubenswrapper[4799]: E0127 07:52:16.354934 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47\": container with ID starting with e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47 not found: ID does not exist" containerID="e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47" Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.354960 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47"} err="failed to get container status \"e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47\": rpc error: code = NotFound desc = could not find container \"e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47\": container with ID starting with e79cabd32cd0be2025f74671726694225c41a00bc1d7788c11d191d3c21e3f47 not found: ID does not exist" Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.368742 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ww5r"] Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.373169 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ww5r"] Jan 27 07:52:16 crc kubenswrapper[4799]: I0127 07:52:16.471414 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="13ca58bb-9a4a-420d-b692-9ceda01d8b0c" path="/var/lib/kubelet/pods/13ca58bb-9a4a-420d-b692-9ceda01d8b0c/volumes" Jan 27 07:52:23 crc kubenswrapper[4799]: I0127 07:52:23.732108 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:52:23 crc kubenswrapper[4799]: I0127 07:52:23.732609 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:52:53 crc kubenswrapper[4799]: I0127 07:52:53.731466 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:52:53 crc kubenswrapper[4799]: I0127 07:52:53.732357 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:52:53 crc kubenswrapper[4799]: I0127 07:52:53.732464 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:52:53 crc kubenswrapper[4799]: I0127 07:52:53.734155 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3fcf94bc860237b7c54a7f06c52954a7a635d56630dbf7dfbfa67647833277b7"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:52:53 crc kubenswrapper[4799]: I0127 07:52:53.734291 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://3fcf94bc860237b7c54a7f06c52954a7a635d56630dbf7dfbfa67647833277b7" gracePeriod=600 Jan 27 07:52:54 crc kubenswrapper[4799]: I0127 07:52:54.594582 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="3fcf94bc860237b7c54a7f06c52954a7a635d56630dbf7dfbfa67647833277b7" exitCode=0 Jan 27 07:52:54 crc kubenswrapper[4799]: I0127 07:52:54.594697 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"3fcf94bc860237b7c54a7f06c52954a7a635d56630dbf7dfbfa67647833277b7"} Jan 27 07:52:54 crc kubenswrapper[4799]: I0127 07:52:54.595044 4799 scope.go:117] "RemoveContainer" containerID="4715ca15dc9413a33674ca689ba7989478e77332cb27148b67f3339cbcfe6c0d" Jan 27 07:52:55 crc kubenswrapper[4799]: I0127 07:52:55.602356 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"9f1fbea47241ca132008f5bc69b6952b9b211129abf8945df312afdf556db38e"} Jan 27 07:55:23 crc kubenswrapper[4799]: I0127 07:55:23.731456 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:55:23 crc kubenswrapper[4799]: I0127 07:55:23.732076 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:55:53 crc kubenswrapper[4799]: I0127 07:55:53.731210 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:55:53 crc kubenswrapper[4799]: I0127 07:55:53.731897 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:56:23 crc kubenswrapper[4799]: I0127 07:56:23.730874 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:56:23 crc kubenswrapper[4799]: I0127 07:56:23.731807 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 27 07:56:23 crc kubenswrapper[4799]: I0127 07:56:23.731872 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:56:23 crc kubenswrapper[4799]: I0127 07:56:23.732658 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f1fbea47241ca132008f5bc69b6952b9b211129abf8945df312afdf556db38e"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:56:23 crc kubenswrapper[4799]: I0127 07:56:23.732710 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://9f1fbea47241ca132008f5bc69b6952b9b211129abf8945df312afdf556db38e" gracePeriod=600 Jan 27 07:56:24 crc kubenswrapper[4799]: I0127 07:56:24.039099 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="9f1fbea47241ca132008f5bc69b6952b9b211129abf8945df312afdf556db38e" exitCode=0 Jan 27 07:56:24 crc kubenswrapper[4799]: I0127 07:56:24.039171 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"9f1fbea47241ca132008f5bc69b6952b9b211129abf8945df312afdf556db38e"} Jan 27 07:56:24 crc kubenswrapper[4799]: I0127 07:56:24.039238 4799 scope.go:117] "RemoveContainer" containerID="3fcf94bc860237b7c54a7f06c52954a7a635d56630dbf7dfbfa67647833277b7" Jan 27 07:56:25 crc kubenswrapper[4799]: I0127 07:56:25.050496 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"483ddb194043e569830cf4dd2964ecb2e41dd9cb022fa07f532bbee3b74029d4"} Jan 27 07:57:43 crc kubenswrapper[4799]: I0127 07:57:43.270846 4799 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:58:53 crc kubenswrapper[4799]: I0127 07:58:53.731734 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:58:53 crc kubenswrapper[4799]: I0127 07:58:53.732611 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:59:23 crc kubenswrapper[4799]: I0127 07:59:23.731829 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:59:23 crc kubenswrapper[4799]: I0127 07:59:23.732473 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:59:42 crc kubenswrapper[4799]: I0127 07:59:42.952332 4799 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hgdnc"] Jan 27 07:59:42 crc kubenswrapper[4799]: E0127 07:59:42.953192 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ca58bb-9a4a-420d-b692-9ceda01d8b0c" containerName="registry" Jan 27 07:59:42 crc kubenswrapper[4799]: I0127 07:59:42.953210 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ca58bb-9a4a-420d-b692-9ceda01d8b0c" containerName="registry" Jan 27 07:59:42 crc kubenswrapper[4799]: I0127 07:59:42.953358 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ca58bb-9a4a-420d-b692-9ceda01d8b0c" containerName="registry" Jan 27 07:59:42 crc kubenswrapper[4799]: I0127 07:59:42.954258 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:42 crc kubenswrapper[4799]: I0127 07:59:42.972637 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgdnc"] Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.010241 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sk8\" (UniqueName: \"kubernetes.io/projected/a3bce872-2524-4c22-8fb5-7fce7040b790-kube-api-access-r6sk8\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.010400 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-catalog-content\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.010465 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-utilities\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.111516 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-catalog-content\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.112564 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-utilities\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.112591 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-catalog-content\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.112665 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6sk8\" (UniqueName: \"kubernetes.io/projected/a3bce872-2524-4c22-8fb5-7fce7040b790-kube-api-access-r6sk8\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.112847 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-utilities\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.133764 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6sk8\" (UniqueName: \"kubernetes.io/projected/a3bce872-2524-4c22-8fb5-7fce7040b790-kube-api-access-r6sk8\") pod \"community-operators-hgdnc\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.328202 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:43 crc kubenswrapper[4799]: I0127 07:59:43.802844 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgdnc"] Jan 27 07:59:44 crc kubenswrapper[4799]: I0127 07:59:44.311181 4799 generic.go:334] "Generic (PLEG): container finished" podID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerID="d4c146f04e2063c35a347214893a806196b0c60c6dd02fc522f67dcc1b087cb3" exitCode=0 Jan 27 07:59:44 crc kubenswrapper[4799]: I0127 07:59:44.311286 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgdnc" event={"ID":"a3bce872-2524-4c22-8fb5-7fce7040b790","Type":"ContainerDied","Data":"d4c146f04e2063c35a347214893a806196b0c60c6dd02fc522f67dcc1b087cb3"} Jan 27 07:59:44 crc kubenswrapper[4799]: I0127 07:59:44.311518 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgdnc" event={"ID":"a3bce872-2524-4c22-8fb5-7fce7040b790","Type":"ContainerStarted","Data":"dcf33e83c7547919b00664c38c59d8c163835610faf1fdda7202102216986b5f"} Jan 27 07:59:44 crc kubenswrapper[4799]: I0127 
07:59:44.312935 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:59:46 crc kubenswrapper[4799]: I0127 07:59:46.325819 4799 generic.go:334] "Generic (PLEG): container finished" podID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerID="abd109e6ffddc10e39e22d930b4abb2aa6895a6a7af1afb6931826df0dbd9ea2" exitCode=0 Jan 27 07:59:46 crc kubenswrapper[4799]: I0127 07:59:46.325932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgdnc" event={"ID":"a3bce872-2524-4c22-8fb5-7fce7040b790","Type":"ContainerDied","Data":"abd109e6ffddc10e39e22d930b4abb2aa6895a6a7af1afb6931826df0dbd9ea2"} Jan 27 07:59:47 crc kubenswrapper[4799]: I0127 07:59:47.334040 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgdnc" event={"ID":"a3bce872-2524-4c22-8fb5-7fce7040b790","Type":"ContainerStarted","Data":"ea78dc189752d1a6475d4c7d21d32ce631847c1447f5125a065cf0494efcac66"} Jan 27 07:59:47 crc kubenswrapper[4799]: I0127 07:59:47.353907 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hgdnc" podStartSLOduration=2.844332372 podStartE2EDuration="5.353890015s" podCreationTimestamp="2026-01-27 07:59:42 +0000 UTC" firstStartedPulling="2026-01-27 07:59:44.312614666 +0000 UTC m=+850.623718731" lastFinishedPulling="2026-01-27 07:59:46.822172269 +0000 UTC m=+853.133276374" observedRunningTime="2026-01-27 07:59:47.352358136 +0000 UTC m=+853.663462201" watchObservedRunningTime="2026-01-27 07:59:47.353890015 +0000 UTC m=+853.664994080" Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.329356 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.329654 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.406137 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.457555 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.639550 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgdnc"] Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.731531 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.731612 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.731669 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.732234 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"483ddb194043e569830cf4dd2964ecb2e41dd9cb022fa07f532bbee3b74029d4"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 27 07:59:53 crc kubenswrapper[4799]: I0127 07:59:53.732294 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://483ddb194043e569830cf4dd2964ecb2e41dd9cb022fa07f532bbee3b74029d4" gracePeriod=600 Jan 27 07:59:54 crc kubenswrapper[4799]: I0127 07:59:54.389593 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="483ddb194043e569830cf4dd2964ecb2e41dd9cb022fa07f532bbee3b74029d4" exitCode=0 Jan 27 07:59:54 crc kubenswrapper[4799]: I0127 07:59:54.389658 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"483ddb194043e569830cf4dd2964ecb2e41dd9cb022fa07f532bbee3b74029d4"} Jan 27 07:59:54 crc kubenswrapper[4799]: I0127 07:59:54.389900 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"213c4fc7aacd3827bd72695593e20f0c8c733e85cf917988ad9ae8811f1be289"} Jan 27 07:59:54 crc kubenswrapper[4799]: I0127 07:59:54.389928 4799 scope.go:117] "RemoveContainer" containerID="9f1fbea47241ca132008f5bc69b6952b9b211129abf8945df312afdf556db38e" Jan 27 07:59:55 crc kubenswrapper[4799]: I0127 07:59:55.401715 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hgdnc" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="registry-server" containerID="cri-o://ea78dc189752d1a6475d4c7d21d32ce631847c1447f5125a065cf0494efcac66" gracePeriod=2 Jan 27 07:59:57 crc kubenswrapper[4799]: I0127 07:59:57.419517 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerID="ea78dc189752d1a6475d4c7d21d32ce631847c1447f5125a065cf0494efcac66" exitCode=0 Jan 27 07:59:57 crc kubenswrapper[4799]: I0127 07:59:57.419583 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgdnc" event={"ID":"a3bce872-2524-4c22-8fb5-7fce7040b790","Type":"ContainerDied","Data":"ea78dc189752d1a6475d4c7d21d32ce631847c1447f5125a065cf0494efcac66"} Jan 27 07:59:58 crc kubenswrapper[4799]: I0127 07:59:58.991275 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.020455 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-utilities\") pod \"a3bce872-2524-4c22-8fb5-7fce7040b790\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.020548 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6sk8\" (UniqueName: \"kubernetes.io/projected/a3bce872-2524-4c22-8fb5-7fce7040b790-kube-api-access-r6sk8\") pod \"a3bce872-2524-4c22-8fb5-7fce7040b790\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.020586 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-catalog-content\") pod \"a3bce872-2524-4c22-8fb5-7fce7040b790\" (UID: \"a3bce872-2524-4c22-8fb5-7fce7040b790\") " Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.022086 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-utilities" (OuterVolumeSpecName: "utilities") pod 
"a3bce872-2524-4c22-8fb5-7fce7040b790" (UID: "a3bce872-2524-4c22-8fb5-7fce7040b790"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.027514 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bce872-2524-4c22-8fb5-7fce7040b790-kube-api-access-r6sk8" (OuterVolumeSpecName: "kube-api-access-r6sk8") pod "a3bce872-2524-4c22-8fb5-7fce7040b790" (UID: "a3bce872-2524-4c22-8fb5-7fce7040b790"). InnerVolumeSpecName "kube-api-access-r6sk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.081426 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3bce872-2524-4c22-8fb5-7fce7040b790" (UID: "a3bce872-2524-4c22-8fb5-7fce7040b790"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.121546 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.121584 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6sk8\" (UniqueName: \"kubernetes.io/projected/a3bce872-2524-4c22-8fb5-7fce7040b790-kube-api-access-r6sk8\") on node \"crc\" DevicePath \"\"" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.121597 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3bce872-2524-4c22-8fb5-7fce7040b790-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.435246 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgdnc" event={"ID":"a3bce872-2524-4c22-8fb5-7fce7040b790","Type":"ContainerDied","Data":"dcf33e83c7547919b00664c38c59d8c163835610faf1fdda7202102216986b5f"} Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.435346 4799 scope.go:117] "RemoveContainer" containerID="ea78dc189752d1a6475d4c7d21d32ce631847c1447f5125a065cf0494efcac66" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.435402 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgdnc" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.460764 4799 scope.go:117] "RemoveContainer" containerID="abd109e6ffddc10e39e22d930b4abb2aa6895a6a7af1afb6931826df0dbd9ea2" Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.485246 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgdnc"] Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.492153 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hgdnc"] Jan 27 07:59:59 crc kubenswrapper[4799]: I0127 07:59:59.507893 4799 scope.go:117] "RemoveContainer" containerID="d4c146f04e2063c35a347214893a806196b0c60c6dd02fc522f67dcc1b087cb3" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.174557 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl"] Jan 27 08:00:00 crc kubenswrapper[4799]: E0127 08:00:00.175933 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="extract-utilities" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.176276 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="extract-utilities" Jan 27 08:00:00 crc kubenswrapper[4799]: E0127 08:00:00.176378 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="registry-server" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.176426 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="registry-server" Jan 27 08:00:00 crc kubenswrapper[4799]: E0127 08:00:00.176476 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="extract-content" Jan 27 08:00:00 crc 
kubenswrapper[4799]: I0127 08:00:00.176528 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="extract-content" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.176661 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" containerName="registry-server" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.177074 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.179090 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.179601 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.181269 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl"] Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.338871 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb1aeee-c165-4184-8db1-f48cde66dd4b-config-volume\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.338981 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb1aeee-c165-4184-8db1-f48cde66dd4b-secret-volume\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.339087 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr8wb\" (UniqueName: \"kubernetes.io/projected/cdb1aeee-c165-4184-8db1-f48cde66dd4b-kube-api-access-gr8wb\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.440851 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr8wb\" (UniqueName: \"kubernetes.io/projected/cdb1aeee-c165-4184-8db1-f48cde66dd4b-kube-api-access-gr8wb\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.441043 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb1aeee-c165-4184-8db1-f48cde66dd4b-config-volume\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.441092 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb1aeee-c165-4184-8db1-f48cde66dd4b-secret-volume\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.442885 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/cdb1aeee-c165-4184-8db1-f48cde66dd4b-config-volume\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.449764 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb1aeee-c165-4184-8db1-f48cde66dd4b-secret-volume\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.461688 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3bce872-2524-4c22-8fb5-7fce7040b790" path="/var/lib/kubelet/pods/a3bce872-2524-4c22-8fb5-7fce7040b790/volumes" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.465194 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr8wb\" (UniqueName: \"kubernetes.io/projected/cdb1aeee-c165-4184-8db1-f48cde66dd4b-kube-api-access-gr8wb\") pod \"collect-profiles-29491680-qrwgl\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.492541 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:00 crc kubenswrapper[4799]: I0127 08:00:00.717205 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl"] Jan 27 08:00:01 crc kubenswrapper[4799]: I0127 08:00:01.452459 4799 generic.go:334] "Generic (PLEG): container finished" podID="cdb1aeee-c165-4184-8db1-f48cde66dd4b" containerID="f9a2ae69d411267c99f2ffb5b83c0a7e0eb885e3bc63db0ca6637c9a30f87fe1" exitCode=0 Jan 27 08:00:01 crc kubenswrapper[4799]: I0127 08:00:01.452670 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" event={"ID":"cdb1aeee-c165-4184-8db1-f48cde66dd4b","Type":"ContainerDied","Data":"f9a2ae69d411267c99f2ffb5b83c0a7e0eb885e3bc63db0ca6637c9a30f87fe1"} Jan 27 08:00:01 crc kubenswrapper[4799]: I0127 08:00:01.453043 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" event={"ID":"cdb1aeee-c165-4184-8db1-f48cde66dd4b","Type":"ContainerStarted","Data":"12c2b81cf60ffe9cac8a639833400172b10c5a1d4854f7fe66155ded8efb3901"} Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.688974 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.871996 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr8wb\" (UniqueName: \"kubernetes.io/projected/cdb1aeee-c165-4184-8db1-f48cde66dd4b-kube-api-access-gr8wb\") pod \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.872038 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb1aeee-c165-4184-8db1-f48cde66dd4b-config-volume\") pod \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.872094 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb1aeee-c165-4184-8db1-f48cde66dd4b-secret-volume\") pod \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\" (UID: \"cdb1aeee-c165-4184-8db1-f48cde66dd4b\") " Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.873148 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb1aeee-c165-4184-8db1-f48cde66dd4b-config-volume" (OuterVolumeSpecName: "config-volume") pod "cdb1aeee-c165-4184-8db1-f48cde66dd4b" (UID: "cdb1aeee-c165-4184-8db1-f48cde66dd4b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.877506 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb1aeee-c165-4184-8db1-f48cde66dd4b-kube-api-access-gr8wb" (OuterVolumeSpecName: "kube-api-access-gr8wb") pod "cdb1aeee-c165-4184-8db1-f48cde66dd4b" (UID: "cdb1aeee-c165-4184-8db1-f48cde66dd4b"). 
InnerVolumeSpecName "kube-api-access-gr8wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.877599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb1aeee-c165-4184-8db1-f48cde66dd4b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cdb1aeee-c165-4184-8db1-f48cde66dd4b" (UID: "cdb1aeee-c165-4184-8db1-f48cde66dd4b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.973837 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr8wb\" (UniqueName: \"kubernetes.io/projected/cdb1aeee-c165-4184-8db1-f48cde66dd4b-kube-api-access-gr8wb\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.973883 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb1aeee-c165-4184-8db1-f48cde66dd4b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:02 crc kubenswrapper[4799]: I0127 08:00:02.973896 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb1aeee-c165-4184-8db1-f48cde66dd4b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:03 crc kubenswrapper[4799]: I0127 08:00:03.473560 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" event={"ID":"cdb1aeee-c165-4184-8db1-f48cde66dd4b","Type":"ContainerDied","Data":"12c2b81cf60ffe9cac8a639833400172b10c5a1d4854f7fe66155ded8efb3901"} Jan 27 08:00:03 crc kubenswrapper[4799]: I0127 08:00:03.473600 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c2b81cf60ffe9cac8a639833400172b10c5a1d4854f7fe66155ded8efb3901" Jan 27 08:00:03 crc kubenswrapper[4799]: I0127 08:00:03.473701 4799 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.686088 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6qfkb"] Jan 27 08:00:22 crc kubenswrapper[4799]: E0127 08:00:22.688525 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb1aeee-c165-4184-8db1-f48cde66dd4b" containerName="collect-profiles" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.688683 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb1aeee-c165-4184-8db1-f48cde66dd4b" containerName="collect-profiles" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.688957 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb1aeee-c165-4184-8db1-f48cde66dd4b" containerName="collect-profiles" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.690350 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.694830 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qfkb"] Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.748178 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-utilities\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.748284 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggpl9\" (UniqueName: \"kubernetes.io/projected/01872076-ffd4-4499-b71d-f9ec337cc01c-kube-api-access-ggpl9\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.748354 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-catalog-content\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.849125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggpl9\" (UniqueName: \"kubernetes.io/projected/01872076-ffd4-4499-b71d-f9ec337cc01c-kube-api-access-ggpl9\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.849196 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-catalog-content\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.849252 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-utilities\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.849807 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-catalog-content\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.849832 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-utilities\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:22 crc kubenswrapper[4799]: I0127 08:00:22.866741 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggpl9\" (UniqueName: \"kubernetes.io/projected/01872076-ffd4-4499-b71d-f9ec337cc01c-kube-api-access-ggpl9\") pod \"redhat-operators-6qfkb\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:23 crc kubenswrapper[4799]: I0127 08:00:23.017220 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:23 crc kubenswrapper[4799]: I0127 08:00:23.459394 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qfkb"] Jan 27 08:00:23 crc kubenswrapper[4799]: I0127 08:00:23.594077 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerStarted","Data":"f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33"} Jan 27 08:00:23 crc kubenswrapper[4799]: I0127 08:00:23.594124 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerStarted","Data":"5f7ebabe8680b49d4fc97b3d2c4d7e0b58f1ddddce84578831bb32ad39b7a475"} Jan 27 08:00:24 crc kubenswrapper[4799]: I0127 08:00:24.600703 4799 generic.go:334] "Generic (PLEG): container finished" podID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerID="f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33" exitCode=0 Jan 27 08:00:24 crc kubenswrapper[4799]: I0127 08:00:24.600751 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerDied","Data":"f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33"} Jan 27 08:00:25 crc kubenswrapper[4799]: I0127 08:00:25.610015 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerStarted","Data":"f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49"} Jan 27 08:00:26 crc kubenswrapper[4799]: I0127 08:00:26.620103 4799 generic.go:334] "Generic (PLEG): container finished" podID="01872076-ffd4-4499-b71d-f9ec337cc01c" 
containerID="f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49" exitCode=0 Jan 27 08:00:26 crc kubenswrapper[4799]: I0127 08:00:26.620216 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerDied","Data":"f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49"} Jan 27 08:00:27 crc kubenswrapper[4799]: I0127 08:00:27.631592 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerStarted","Data":"90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc"} Jan 27 08:00:27 crc kubenswrapper[4799]: I0127 08:00:27.650258 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6qfkb" podStartSLOduration=3.2215393 podStartE2EDuration="5.650232645s" podCreationTimestamp="2026-01-27 08:00:22 +0000 UTC" firstStartedPulling="2026-01-27 08:00:24.602243297 +0000 UTC m=+890.913347362" lastFinishedPulling="2026-01-27 08:00:27.030936632 +0000 UTC m=+893.342040707" observedRunningTime="2026-01-27 08:00:27.646612913 +0000 UTC m=+893.957716988" watchObservedRunningTime="2026-01-27 08:00:27.650232645 +0000 UTC m=+893.961336710" Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.780482 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hggcd"] Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.781510 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-controller" containerID="cri-o://495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" gracePeriod=30 Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.781652 4799 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" gracePeriod=30 Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.781698 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-node" containerID="cri-o://5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" gracePeriod=30 Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.781634 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="northd" containerID="cri-o://60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" gracePeriod=30 Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.781731 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-acl-logging" containerID="cri-o://14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" gracePeriod=30 Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.781903 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="sbdb" containerID="cri-o://b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" gracePeriod=30 Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.781885 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="nbdb" 
containerID="cri-o://1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" gracePeriod=30 Jan 27 08:00:31 crc kubenswrapper[4799]: I0127 08:00:31.818594 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" containerID="cri-o://a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" gracePeriod=30 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.458015 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/3.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.463318 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovn-acl-logging/0.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.464174 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovn-controller/0.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.465472 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.523542 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fgj2p"] Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.523772 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="nbdb" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.523786 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="nbdb" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.523799 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="northd" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.523807 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="northd" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.523817 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.523825 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.523837 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.523847 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.523860 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" 
containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.523868 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.523880 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-node" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.523888 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-node" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.523902 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-acl-logging" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524021 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-acl-logging" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.524034 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524043 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.524054 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524063 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.524072 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="sbdb" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524081 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="sbdb" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.524093 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kubecfg-setup" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524101 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kubecfg-setup" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.524124 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524132 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524251 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-acl-logging" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524267 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="nbdb" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524276 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovn-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524286 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="sbdb" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524347 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524360 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524369 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524416 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524429 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="northd" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524441 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524453 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524467 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="kube-rbac-proxy-node" Jan 27 08:00:32 crc kubenswrapper[4799]: E0127 08:00:32.524600 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.524610 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerName="ovnkube-controller" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.526774 4799 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.570767 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-node-log\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.570849 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-kubelet\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.570881 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovn-node-metrics-cert\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.570882 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-node-log" (OuterVolumeSpecName: "node-log") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.570901 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-openvswitch\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.570944 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.570999 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-var-lib-openvswitch\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571001 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571044 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-config\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571072 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-env-overrides\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571072 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571103 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-netns\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571126 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-script-lib\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571168 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-slash\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571182 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571197 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-netd\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571224 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnc94\" (UniqueName: \"kubernetes.io/projected/836be94a-c1de-4b1c-b98a-7af78a2a4607-kube-api-access-nnc94\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571259 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-var-lib-cni-networks-ovn-kubernetes\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571279 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-log-socket\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571317 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-ovn\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571338 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-systemd-units\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571356 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-systemd\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571387 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-etc-openvswitch\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571411 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-ovn-kubernetes\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571430 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-bin\") pod \"836be94a-c1de-4b1c-b98a-7af78a2a4607\" (UID: \"836be94a-c1de-4b1c-b98a-7af78a2a4607\") " Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571649 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571669 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571706 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571769 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-slash" (OuterVolumeSpecName: "host-slash") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571801 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571898 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.571950 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572053 4799 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572075 4799 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572092 4799 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572105 4799 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 
crc kubenswrapper[4799]: I0127 08:00:32.572119 4799 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572135 4799 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572149 4799 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572162 4799 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572175 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572188 4799 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572201 4799 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572214 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572509 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572548 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572581 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572609 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.572887 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-log-socket" (OuterVolumeSpecName: "log-socket") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.576110 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.576524 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836be94a-c1de-4b1c-b98a-7af78a2a4607-kube-api-access-nnc94" (OuterVolumeSpecName: "kube-api-access-nnc94") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "kube-api-access-nnc94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.586734 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "836be94a-c1de-4b1c-b98a-7af78a2a4607" (UID: "836be94a-c1de-4b1c-b98a-7af78a2a4607"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.672048 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgr7w_60934e21-bc53-4f80-bb08-bb67af7301cd/kube-multus/1.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.672971 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-env-overrides\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-cni-bin\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673067 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-run-ovn-kubernetes\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673087 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673108 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-var-lib-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673125 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-etc-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673142 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovnkube-config\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673157 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-log-socket\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673323 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-node-log\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673429 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovn-node-metrics-cert\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673525 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-systemd\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673560 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673608 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-systemd-units\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673634 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovnkube-script-lib\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673656 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltpbw\" (UniqueName: \"kubernetes.io/projected/31c06f68-d4da-4ead-a1d1-f47806c1517b-kube-api-access-ltpbw\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673682 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-ovn\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673701 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-cni-netd\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673730 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-run-netns\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673763 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-kubelet\") pod \"ovnkube-node-fgj2p\" (UID: 
\"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673786 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-slash\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673862 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnc94\" (UniqueName: \"kubernetes.io/projected/836be94a-c1de-4b1c-b98a-7af78a2a4607-kube-api-access-nnc94\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673881 4799 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673894 4799 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673908 4799 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673920 4799 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673936 4799 reconciler_common.go:293] "Volume detached for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673948 4799 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/836be94a-c1de-4b1c-b98a-7af78a2a4607-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.673960 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/836be94a-c1de-4b1c-b98a-7af78a2a4607-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.675466 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgr7w_60934e21-bc53-4f80-bb08-bb67af7301cd/kube-multus/0.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.675515 4799 generic.go:334] "Generic (PLEG): container finished" podID="60934e21-bc53-4f80-bb08-bb67af7301cd" containerID="49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb" exitCode=2 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.675603 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgr7w" event={"ID":"60934e21-bc53-4f80-bb08-bb67af7301cd","Type":"ContainerDied","Data":"49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.675637 4799 scope.go:117] "RemoveContainer" containerID="10ceb586c802d2bce1bbe5dfb8fdd4186f2d1b3876d7036fb35493e8fc156db3" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.676080 4799 scope.go:117] "RemoveContainer" containerID="49f68d9971ee77d48b2b7db56c05766ea054be9dbf688bf2110af470179aacfb" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.678105 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovnkube-controller/3.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.681826 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovn-acl-logging/0.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.685959 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hggcd_836be94a-c1de-4b1c-b98a-7af78a2a4607/ovn-controller/0.log" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686291 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" exitCode=0 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686335 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" exitCode=0 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686342 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" exitCode=0 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686349 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" exitCode=0 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686355 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" exitCode=0 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686361 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" exitCode=0 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686367 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" exitCode=143 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686373 4799 generic.go:334] "Generic (PLEG): container finished" podID="836be94a-c1de-4b1c-b98a-7af78a2a4607" containerID="495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" exitCode=143 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686391 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686427 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686438 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686446 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686456 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686465 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686474 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686485 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686490 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686496 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686502 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686507 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686512 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686517 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686522 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686526 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686534 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686541 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686546 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686551 4799 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686556 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686565 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686569 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686574 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686579 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686583 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686588 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686597 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686604 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686610 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686616 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686621 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686627 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686632 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686637 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} Jan 27 
08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686642 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686647 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686652 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686659 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" event={"ID":"836be94a-c1de-4b1c-b98a-7af78a2a4607","Type":"ContainerDied","Data":"b37a897f1d7c7fd61b602fd229b5ec7f496fbebd5c4bc0144407a024fe391418"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686667 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686673 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686678 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686683 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686688 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686693 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686697 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686702 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686706 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686711 4799 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.686585 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hggcd" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.722629 4799 scope.go:117] "RemoveContainer" containerID="a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.727897 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hggcd"] Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.732110 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hggcd"] Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775039 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-cni-bin\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775090 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-run-ovn-kubernetes\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775154 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-var-lib-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-etc-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775224 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-etc-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775340 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-cni-bin\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775561 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775445 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovnkube-config\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775694 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-log-socket\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775735 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-var-lib-openvswitch\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775792 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-log-socket\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775835 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-node-log\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775922 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovn-node-metrics-cert\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.775997 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776065 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-systemd\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776104 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776170 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-systemd-units\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776211 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovnkube-script-lib\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776242 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltpbw\" (UniqueName: \"kubernetes.io/projected/31c06f68-d4da-4ead-a1d1-f47806c1517b-kube-api-access-ltpbw\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776277 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-cni-netd\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776266 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovnkube-config\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776328 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-ovn\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776366 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-run-netns\") pod 
\"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776415 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-systemd-units\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776429 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-kubelet\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-slash\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776528 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-env-overrides\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776694 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-systemd\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.776736 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.777127 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-run-ovn\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.777170 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-cni-netd\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.777225 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-node-log\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.777816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-run-netns\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: 
I0127 08:00:32.777830 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovnkube-script-lib\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.777877 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-kubelet\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.777915 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31c06f68-d4da-4ead-a1d1-f47806c1517b-host-slash\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.778152 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31c06f68-d4da-4ead-a1d1-f47806c1517b-env-overrides\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.784163 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31c06f68-d4da-4ead-a1d1-f47806c1517b-ovn-node-metrics-cert\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.799351 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ltpbw\" (UniqueName: \"kubernetes.io/projected/31c06f68-d4da-4ead-a1d1-f47806c1517b-kube-api-access-ltpbw\") pod \"ovnkube-node-fgj2p\" (UID: \"31c06f68-d4da-4ead-a1d1-f47806c1517b\") " pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.803641 4799 scope.go:117] "RemoveContainer" containerID="b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.824033 4799 scope.go:117] "RemoveContainer" containerID="1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.844874 4799 scope.go:117] "RemoveContainer" containerID="60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.845075 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.875861 4799 scope.go:117] "RemoveContainer" containerID="4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" Jan 27 08:00:32 crc kubenswrapper[4799]: W0127 08:00:32.880867 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c06f68_d4da_4ead_a1d1_f47806c1517b.slice/crio-bbe653aae27bcdcc170709df1b51f450fbf5dc5273d1ea633672708287b281f1 WatchSource:0}: Error finding container bbe653aae27bcdcc170709df1b51f450fbf5dc5273d1ea633672708287b281f1: Status 404 returned error can't find the container with id bbe653aae27bcdcc170709df1b51f450fbf5dc5273d1ea633672708287b281f1 Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.903533 4799 scope.go:117] "RemoveContainer" containerID="5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.967356 4799 scope.go:117] "RemoveContainer" 
containerID="14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" Jan 27 08:00:32 crc kubenswrapper[4799]: I0127 08:00:32.984614 4799 scope.go:117] "RemoveContainer" containerID="495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.017429 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.017770 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.017509 4799 scope.go:117] "RemoveContainer" containerID="35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.030290 4799 scope.go:117] "RemoveContainer" containerID="a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.030800 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": container with ID starting with a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113 not found: ID does not exist" containerID="a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.030827 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} err="failed to get container status \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": rpc error: code = NotFound desc = could not find container \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": container with ID starting with 
a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.030848 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.031256 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": container with ID starting with b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3 not found: ID does not exist" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.031289 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} err="failed to get container status \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": rpc error: code = NotFound desc = could not find container \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": container with ID starting with b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.031323 4799 scope.go:117] "RemoveContainer" containerID="b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.031591 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": container with ID starting with b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8 not found: ID does not exist" containerID="b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" Jan 27 08:00:33 crc 
kubenswrapper[4799]: I0127 08:00:33.031618 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} err="failed to get container status \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": rpc error: code = NotFound desc = could not find container \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": container with ID starting with b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.031635 4799 scope.go:117] "RemoveContainer" containerID="1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.031918 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": container with ID starting with 1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6 not found: ID does not exist" containerID="1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.031946 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} err="failed to get container status \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": rpc error: code = NotFound desc = could not find container \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": container with ID starting with 1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.031964 4799 scope.go:117] "RemoveContainer" containerID="60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" Jan 27 
08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.032179 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": container with ID starting with 60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee not found: ID does not exist" containerID="60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032202 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} err="failed to get container status \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": rpc error: code = NotFound desc = could not find container \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": container with ID starting with 60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032217 4799 scope.go:117] "RemoveContainer" containerID="4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.032430 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": container with ID starting with 4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3 not found: ID does not exist" containerID="4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032453 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} err="failed to get container status 
\"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": rpc error: code = NotFound desc = could not find container \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": container with ID starting with 4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032466 4799 scope.go:117] "RemoveContainer" containerID="5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.032675 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": container with ID starting with 5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5 not found: ID does not exist" containerID="5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032710 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} err="failed to get container status \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": rpc error: code = NotFound desc = could not find container \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": container with ID starting with 5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032724 4799 scope.go:117] "RemoveContainer" containerID="14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.032909 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": container with ID starting with 14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909 not found: ID does not exist" containerID="14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032929 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} err="failed to get container status \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": rpc error: code = NotFound desc = could not find container \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": container with ID starting with 14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.032943 4799 scope.go:117] "RemoveContainer" containerID="495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.033124 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": container with ID starting with 495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211 not found: ID does not exist" containerID="495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033141 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} err="failed to get container status \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": rpc error: code = NotFound desc = could not find container \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": container with ID 
starting with 495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033153 4799 scope.go:117] "RemoveContainer" containerID="35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95" Jan 27 08:00:33 crc kubenswrapper[4799]: E0127 08:00:33.033425 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": container with ID starting with 35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95 not found: ID does not exist" containerID="35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033443 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} err="failed to get container status \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": rpc error: code = NotFound desc = could not find container \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": container with ID starting with 35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033456 4799 scope.go:117] "RemoveContainer" containerID="a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033643 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} err="failed to get container status \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": rpc error: code = NotFound desc = could not find container \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": 
container with ID starting with a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033662 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033853 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} err="failed to get container status \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": rpc error: code = NotFound desc = could not find container \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": container with ID starting with b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.033878 4799 scope.go:117] "RemoveContainer" containerID="b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.034062 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} err="failed to get container status \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": rpc error: code = NotFound desc = could not find container \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": container with ID starting with b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.034075 4799 scope.go:117] "RemoveContainer" containerID="1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.034233 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} err="failed to get container status \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": rpc error: code = NotFound desc = could not find container \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": container with ID starting with 1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.034246 4799 scope.go:117] "RemoveContainer" containerID="60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.034603 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} err="failed to get container status \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": rpc error: code = NotFound desc = could not find container \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": container with ID starting with 60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.034617 4799 scope.go:117] "RemoveContainer" containerID="4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.035083 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} err="failed to get container status \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": rpc error: code = NotFound desc = could not find container \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": container with ID starting with 4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3 not found: ID does not 
exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.035099 4799 scope.go:117] "RemoveContainer" containerID="5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.035377 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} err="failed to get container status \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": rpc error: code = NotFound desc = could not find container \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": container with ID starting with 5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.035395 4799 scope.go:117] "RemoveContainer" containerID="14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.035741 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} err="failed to get container status \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": rpc error: code = NotFound desc = could not find container \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": container with ID starting with 14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.035758 4799 scope.go:117] "RemoveContainer" containerID="495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.036279 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} err="failed to get container status 
\"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": rpc error: code = NotFound desc = could not find container \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": container with ID starting with 495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.036392 4799 scope.go:117] "RemoveContainer" containerID="35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.040088 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} err="failed to get container status \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": rpc error: code = NotFound desc = could not find container \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": container with ID starting with 35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.040107 4799 scope.go:117] "RemoveContainer" containerID="a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.040564 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} err="failed to get container status \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": rpc error: code = NotFound desc = could not find container \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": container with ID starting with a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.040622 4799 scope.go:117] "RemoveContainer" 
containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.041176 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} err="failed to get container status \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": rpc error: code = NotFound desc = could not find container \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": container with ID starting with b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.041202 4799 scope.go:117] "RemoveContainer" containerID="b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.041440 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} err="failed to get container status \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": rpc error: code = NotFound desc = could not find container \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": container with ID starting with b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.041460 4799 scope.go:117] "RemoveContainer" containerID="1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.041718 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} err="failed to get container status \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": rpc error: code = NotFound desc = could 
not find container \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": container with ID starting with 1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.041737 4799 scope.go:117] "RemoveContainer" containerID="60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.043112 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} err="failed to get container status \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": rpc error: code = NotFound desc = could not find container \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": container with ID starting with 60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.043157 4799 scope.go:117] "RemoveContainer" containerID="4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.043494 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} err="failed to get container status \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": rpc error: code = NotFound desc = could not find container \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": container with ID starting with 4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.043511 4799 scope.go:117] "RemoveContainer" containerID="5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 
08:00:33.043744 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} err="failed to get container status \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": rpc error: code = NotFound desc = could not find container \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": container with ID starting with 5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.043756 4799 scope.go:117] "RemoveContainer" containerID="14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044017 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} err="failed to get container status \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": rpc error: code = NotFound desc = could not find container \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": container with ID starting with 14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044029 4799 scope.go:117] "RemoveContainer" containerID="495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044229 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} err="failed to get container status \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": rpc error: code = NotFound desc = could not find container \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": container with ID starting with 
495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044242 4799 scope.go:117] "RemoveContainer" containerID="35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044456 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} err="failed to get container status \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": rpc error: code = NotFound desc = could not find container \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": container with ID starting with 35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044468 4799 scope.go:117] "RemoveContainer" containerID="a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044696 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113"} err="failed to get container status \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": rpc error: code = NotFound desc = could not find container \"a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113\": container with ID starting with a280d2636cea740d4266a831f45484b7cba897bd4eac70e288dd0e9576a96113 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044710 4799 scope.go:117] "RemoveContainer" containerID="b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044890 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3"} err="failed to get container status \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": rpc error: code = NotFound desc = could not find container \"b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3\": container with ID starting with b137c4b8a23fa36e4c4f15a087d6e64b9aac7a9726027062cc95edd2aa216bc3 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.044901 4799 scope.go:117] "RemoveContainer" containerID="b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.045203 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8"} err="failed to get container status \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": rpc error: code = NotFound desc = could not find container \"b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8\": container with ID starting with b91280e557788e7fe5469346f8492cad1d2bfc78a08c4453c74339dbf70a6fe8 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.045216 4799 scope.go:117] "RemoveContainer" containerID="1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.045599 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6"} err="failed to get container status \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": rpc error: code = NotFound desc = could not find container \"1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6\": container with ID starting with 1ca4cfd3f2aff82718578188e746e280b627647d8e19a2a1c00789d21bcb09d6 not found: ID does not 
exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.045617 4799 scope.go:117] "RemoveContainer" containerID="60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.045853 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee"} err="failed to get container status \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": rpc error: code = NotFound desc = could not find container \"60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee\": container with ID starting with 60f23e5ade8ab76e99e59732bcaad01a8e2ec14c42f06e5e4d32134348add1ee not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.045867 4799 scope.go:117] "RemoveContainer" containerID="4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.047822 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3"} err="failed to get container status \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": rpc error: code = NotFound desc = could not find container \"4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3\": container with ID starting with 4abebdb7776c48bfd93eac2814541fe5ecea506ef8e75a6aeda6903b967713f3 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.047841 4799 scope.go:117] "RemoveContainer" containerID="5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.048070 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5"} err="failed to get container status 
\"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": rpc error: code = NotFound desc = could not find container \"5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5\": container with ID starting with 5b3293ad6e47aa6d6e59467a086863cd9c091be2891e4d1fe91d2219bd3285b5 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.048084 4799 scope.go:117] "RemoveContainer" containerID="14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.048288 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909"} err="failed to get container status \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": rpc error: code = NotFound desc = could not find container \"14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909\": container with ID starting with 14b66026458b4019756124949cb5764589cc071b425fa88c850548e42a13b909 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.048316 4799 scope.go:117] "RemoveContainer" containerID="495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.048525 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211"} err="failed to get container status \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": rpc error: code = NotFound desc = could not find container \"495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211\": container with ID starting with 495afc5881255ac1da301d8fd53dbaf716ef6f738ac75803a789f9c3668db211 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.048539 4799 scope.go:117] "RemoveContainer" 
containerID="35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.049618 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95"} err="failed to get container status \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": rpc error: code = NotFound desc = could not find container \"35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95\": container with ID starting with 35e47497388ba56628faa5aa461d9824937354361c121321e1aba4f7a4437d95 not found: ID does not exist" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.697350 4799 generic.go:334] "Generic (PLEG): container finished" podID="31c06f68-d4da-4ead-a1d1-f47806c1517b" containerID="d2fb1d7e6218711218fecd42c32e1f5738a6ee59454ac8dce0a67bd697ac8e34" exitCode=0 Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.697429 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerDied","Data":"d2fb1d7e6218711218fecd42c32e1f5738a6ee59454ac8dce0a67bd697ac8e34"} Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.697771 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"bbe653aae27bcdcc170709df1b51f450fbf5dc5273d1ea633672708287b281f1"} Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.700749 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tgr7w_60934e21-bc53-4f80-bb08-bb67af7301cd/kube-multus/1.log" Jan 27 08:00:33 crc kubenswrapper[4799]: I0127 08:00:33.700839 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tgr7w" 
event={"ID":"60934e21-bc53-4f80-bb08-bb67af7301cd","Type":"ContainerStarted","Data":"1451b65c746072ba4b88889dbb58eed19a85fed4e6b31dce3a1cbd468e3c6cfe"} Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.070388 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6qfkb" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="registry-server" probeResult="failure" output=< Jan 27 08:00:34 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 08:00:34 crc kubenswrapper[4799]: > Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.461647 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836be94a-c1de-4b1c-b98a-7af78a2a4607" path="/var/lib/kubelet/pods/836be94a-c1de-4b1c-b98a-7af78a2a4607/volumes" Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.713434 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"ceeba7dc581b666fc15e90fcd29da3071a946dc643400c1323fc2cff4ba701dc"} Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.713536 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"189b48570eef56aed8f60d0c33ea3f2c476b822921c3359eaaf4630f5bfcf6e5"} Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.713557 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"0010a31e4d1e32474843fccfb9cc0e942de4e09f168b2310b70f333158301c89"} Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.713575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" 
event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"cc4e4ec3d11b2189471aed404f4ec551c1f67a6c66374b46524b9a976b48f03c"} Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.713592 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"ddf55b70d68b0ca639c5a634a83fcf1897440b4c84a53c4f5b0718ea460e5fc7"} Jan 27 08:00:34 crc kubenswrapper[4799]: I0127 08:00:34.713610 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"1a45cacfd1bc9f15595f73ab49e993c6356b3a2a98b7e8d62e9d4d59adab9acf"} Jan 27 08:00:36 crc kubenswrapper[4799]: I0127 08:00:36.729869 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"26ed096105d89693e459b3286edaa025b874e456f268448364ef364beab985fb"} Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.751829 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" event={"ID":"31c06f68-d4da-4ead-a1d1-f47806c1517b","Type":"ContainerStarted","Data":"2da96093f29c9f01739bfafdad468bb3b8e509612e716b76ea0dab263e25a704"} Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.752927 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.752953 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.779180 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" 
podStartSLOduration=7.779161245 podStartE2EDuration="7.779161245s" podCreationTimestamp="2026-01-27 08:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:00:39.775359409 +0000 UTC m=+906.086463494" watchObservedRunningTime="2026-01-27 08:00:39.779161245 +0000 UTC m=+906.090265310" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.784010 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.836956 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-x65vd"] Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.837798 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.839860 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.839997 4799 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5dffm" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.840255 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-x65vd"] Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.840737 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.840798 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.880862 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/a490c085-39fb-4831-89cf-2ccaf0826bdd-crc-storage\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.880970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrrb\" (UniqueName: \"kubernetes.io/projected/a490c085-39fb-4831-89cf-2ccaf0826bdd-kube-api-access-9rrrb\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.881000 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a490c085-39fb-4831-89cf-2ccaf0826bdd-node-mnt\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.981751 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrrb\" (UniqueName: \"kubernetes.io/projected/a490c085-39fb-4831-89cf-2ccaf0826bdd-kube-api-access-9rrrb\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.981825 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a490c085-39fb-4831-89cf-2ccaf0826bdd-node-mnt\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.981881 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a490c085-39fb-4831-89cf-2ccaf0826bdd-crc-storage\") 
pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.982216 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a490c085-39fb-4831-89cf-2ccaf0826bdd-node-mnt\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:39 crc kubenswrapper[4799]: I0127 08:00:39.982997 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a490c085-39fb-4831-89cf-2ccaf0826bdd-crc-storage\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: I0127 08:00:40.006910 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrrb\" (UniqueName: \"kubernetes.io/projected/a490c085-39fb-4831-89cf-2ccaf0826bdd-kube-api-access-9rrrb\") pod \"crc-storage-crc-x65vd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: I0127 08:00:40.154248 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.188558 4799 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(b9ed99cc5c70c43916982334be38a8147cc4d8e72f57e0f989f67b0ddeced13e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.188854 4799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(b9ed99cc5c70c43916982334be38a8147cc4d8e72f57e0f989f67b0ddeced13e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.188880 4799 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(b9ed99cc5c70c43916982334be38a8147cc4d8e72f57e0f989f67b0ddeced13e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.188932 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-x65vd_crc-storage(a490c085-39fb-4831-89cf-2ccaf0826bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-x65vd_crc-storage(a490c085-39fb-4831-89cf-2ccaf0826bdd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(b9ed99cc5c70c43916982334be38a8147cc4d8e72f57e0f989f67b0ddeced13e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-x65vd" podUID="a490c085-39fb-4831-89cf-2ccaf0826bdd" Jan 27 08:00:40 crc kubenswrapper[4799]: I0127 08:00:40.757451 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: I0127 08:00:40.758098 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:40 crc kubenswrapper[4799]: I0127 08:00:40.758335 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.787702 4799 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(cdb6b339944f3899853639c28963b6c3fd59a85a5757230111f0430b3844f50e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.787774 4799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(cdb6b339944f3899853639c28963b6c3fd59a85a5757230111f0430b3844f50e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.787799 4799 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(cdb6b339944f3899853639c28963b6c3fd59a85a5757230111f0430b3844f50e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:40 crc kubenswrapper[4799]: E0127 08:00:40.787851 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-x65vd_crc-storage(a490c085-39fb-4831-89cf-2ccaf0826bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-x65vd_crc-storage(a490c085-39fb-4831-89cf-2ccaf0826bdd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-x65vd_crc-storage_a490c085-39fb-4831-89cf-2ccaf0826bdd_0(cdb6b339944f3899853639c28963b6c3fd59a85a5757230111f0430b3844f50e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-x65vd" podUID="a490c085-39fb-4831-89cf-2ccaf0826bdd" Jan 27 08:00:40 crc kubenswrapper[4799]: I0127 08:00:40.794660 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:00:43 crc kubenswrapper[4799]: I0127 08:00:43.056564 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:43 crc kubenswrapper[4799]: I0127 08:00:43.105943 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:43 crc kubenswrapper[4799]: I0127 08:00:43.292091 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qfkb"] Jan 27 08:00:44 crc kubenswrapper[4799]: I0127 08:00:44.783433 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6qfkb" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="registry-server" containerID="cri-o://90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc" gracePeriod=2 Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.637928 4799 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.701147 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pkn2z"] Jan 27 08:00:45 crc kubenswrapper[4799]: E0127 08:00:45.701415 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="extract-utilities" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.701430 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="extract-utilities" Jan 27 08:00:45 crc kubenswrapper[4799]: E0127 08:00:45.701440 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="registry-server" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.701448 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="registry-server" Jan 27 08:00:45 crc kubenswrapper[4799]: E0127 08:00:45.701469 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="extract-content" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.701477 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="extract-content" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.701592 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerName="registry-server" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.702458 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.717875 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkn2z"] Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.763076 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggpl9\" (UniqueName: \"kubernetes.io/projected/01872076-ffd4-4499-b71d-f9ec337cc01c-kube-api-access-ggpl9\") pod \"01872076-ffd4-4499-b71d-f9ec337cc01c\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.763148 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-utilities\") pod \"01872076-ffd4-4499-b71d-f9ec337cc01c\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.763198 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-catalog-content\") pod \"01872076-ffd4-4499-b71d-f9ec337cc01c\" (UID: \"01872076-ffd4-4499-b71d-f9ec337cc01c\") " Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.763409 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-catalog-content\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.763475 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrzm\" (UniqueName: 
\"kubernetes.io/projected/1a5de662-d02a-49af-b510-84d4567ad830-kube-api-access-zmrzm\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.763534 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-utilities\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.764348 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-utilities" (OuterVolumeSpecName: "utilities") pod "01872076-ffd4-4499-b71d-f9ec337cc01c" (UID: "01872076-ffd4-4499-b71d-f9ec337cc01c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.768490 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01872076-ffd4-4499-b71d-f9ec337cc01c-kube-api-access-ggpl9" (OuterVolumeSpecName: "kube-api-access-ggpl9") pod "01872076-ffd4-4499-b71d-f9ec337cc01c" (UID: "01872076-ffd4-4499-b71d-f9ec337cc01c"). InnerVolumeSpecName "kube-api-access-ggpl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.790199 4799 generic.go:334] "Generic (PLEG): container finished" podID="01872076-ffd4-4499-b71d-f9ec337cc01c" containerID="90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc" exitCode=0 Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.790253 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerDied","Data":"90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc"} Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.790323 4799 scope.go:117] "RemoveContainer" containerID="90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.790291 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qfkb" event={"ID":"01872076-ffd4-4499-b71d-f9ec337cc01c","Type":"ContainerDied","Data":"5f7ebabe8680b49d4fc97b3d2c4d7e0b58f1ddddce84578831bb32ad39b7a475"} Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.790433 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qfkb" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.809263 4799 scope.go:117] "RemoveContainer" containerID="f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.829269 4799 scope.go:117] "RemoveContainer" containerID="f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.842537 4799 scope.go:117] "RemoveContainer" containerID="90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc" Jan 27 08:00:45 crc kubenswrapper[4799]: E0127 08:00:45.843525 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc\": container with ID starting with 90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc not found: ID does not exist" containerID="90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.843575 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc"} err="failed to get container status \"90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc\": rpc error: code = NotFound desc = could not find container \"90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc\": container with ID starting with 90b67cd8127b087046d7160e01120a9ca93cb9fc444a12ce868c0489ac19bfbc not found: ID does not exist" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.843607 4799 scope.go:117] "RemoveContainer" containerID="f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49" Jan 27 08:00:45 crc kubenswrapper[4799]: E0127 08:00:45.844681 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49\": container with ID starting with f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49 not found: ID does not exist" containerID="f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.844715 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49"} err="failed to get container status \"f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49\": rpc error: code = NotFound desc = could not find container \"f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49\": container with ID starting with f3515a7a8fde268aa90f135fc900443c56de682da32dbccf362b48adc34f6d49 not found: ID does not exist" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.844739 4799 scope.go:117] "RemoveContainer" containerID="f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33" Jan 27 08:00:45 crc kubenswrapper[4799]: E0127 08:00:45.845023 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33\": container with ID starting with f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33 not found: ID does not exist" containerID="f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.845063 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33"} err="failed to get container status \"f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33\": rpc error: code = NotFound desc = could not find container 
\"f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33\": container with ID starting with f343722824737da1bb40508c2c1c8f7800367ecc3f742dad302944897596fc33 not found: ID does not exist" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.864562 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmrzm\" (UniqueName: \"kubernetes.io/projected/1a5de662-d02a-49af-b510-84d4567ad830-kube-api-access-zmrzm\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.864656 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-utilities\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.864697 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-catalog-content\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.864768 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggpl9\" (UniqueName: \"kubernetes.io/projected/01872076-ffd4-4499-b71d-f9ec337cc01c-kube-api-access-ggpl9\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.864785 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.865327 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-catalog-content\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.865956 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-utilities\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.884138 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmrzm\" (UniqueName: \"kubernetes.io/projected/1a5de662-d02a-49af-b510-84d4567ad830-kube-api-access-zmrzm\") pod \"redhat-marketplace-pkn2z\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.889206 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01872076-ffd4-4499-b71d-f9ec337cc01c" (UID: "01872076-ffd4-4499-b71d-f9ec337cc01c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:00:45 crc kubenswrapper[4799]: I0127 08:00:45.965743 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01872076-ffd4-4499-b71d-f9ec337cc01c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.030092 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.142042 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qfkb"] Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.144847 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6qfkb"] Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.217256 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkn2z"] Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.458128 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01872076-ffd4-4499-b71d-f9ec337cc01c" path="/var/lib/kubelet/pods/01872076-ffd4-4499-b71d-f9ec337cc01c/volumes" Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.798885 4799 generic.go:334] "Generic (PLEG): container finished" podID="1a5de662-d02a-49af-b510-84d4567ad830" containerID="e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4" exitCode=0 Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.798932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkn2z" event={"ID":"1a5de662-d02a-49af-b510-84d4567ad830","Type":"ContainerDied","Data":"e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4"} Jan 27 08:00:46 crc kubenswrapper[4799]: I0127 08:00:46.798965 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkn2z" event={"ID":"1a5de662-d02a-49af-b510-84d4567ad830","Type":"ContainerStarted","Data":"b680fc5230c0b7eb49d308a556386312ccd52319f88469623c58fd2361281cf0"} Jan 27 08:00:47 crc kubenswrapper[4799]: I0127 08:00:47.806344 4799 generic.go:334] "Generic (PLEG): container finished" podID="1a5de662-d02a-49af-b510-84d4567ad830" containerID="57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98" exitCode=0 Jan 
27 08:00:47 crc kubenswrapper[4799]: I0127 08:00:47.806454 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkn2z" event={"ID":"1a5de662-d02a-49af-b510-84d4567ad830","Type":"ContainerDied","Data":"57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98"} Jan 27 08:00:49 crc kubenswrapper[4799]: I0127 08:00:49.818646 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkn2z" event={"ID":"1a5de662-d02a-49af-b510-84d4567ad830","Type":"ContainerStarted","Data":"6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6"} Jan 27 08:00:53 crc kubenswrapper[4799]: I0127 08:00:53.904457 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pkn2z" podStartSLOduration=6.958566475 podStartE2EDuration="8.904434082s" podCreationTimestamp="2026-01-27 08:00:45 +0000 UTC" firstStartedPulling="2026-01-27 08:00:46.800877103 +0000 UTC m=+913.111981178" lastFinishedPulling="2026-01-27 08:00:48.74674471 +0000 UTC m=+915.057848785" observedRunningTime="2026-01-27 08:00:49.844355428 +0000 UTC m=+916.155459503" watchObservedRunningTime="2026-01-27 08:00:53.904434082 +0000 UTC m=+920.215538167" Jan 27 08:00:53 crc kubenswrapper[4799]: I0127 08:00:53.909670 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zmfvg"] Jan 27 08:00:53 crc kubenswrapper[4799]: I0127 08:00:53.910869 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:53 crc kubenswrapper[4799]: I0127 08:00:53.941916 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmfvg"] Jan 27 08:00:53 crc kubenswrapper[4799]: I0127 08:00:53.969890 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-catalog-content\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:53 crc kubenswrapper[4799]: I0127 08:00:53.969950 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96p5\" (UniqueName: \"kubernetes.io/projected/90bdf854-e5f4-4390-9cea-a049ad195cce-kube-api-access-h96p5\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:53 crc kubenswrapper[4799]: I0127 08:00:53.969980 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-utilities\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.071829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-catalog-content\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.071908 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h96p5\" (UniqueName: \"kubernetes.io/projected/90bdf854-e5f4-4390-9cea-a049ad195cce-kube-api-access-h96p5\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.071957 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-utilities\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.072400 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-catalog-content\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.072467 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-utilities\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.099411 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h96p5\" (UniqueName: \"kubernetes.io/projected/90bdf854-e5f4-4390-9cea-a049ad195cce-kube-api-access-h96p5\") pod \"certified-operators-zmfvg\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.237839 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.450811 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.454708 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.460794 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmfvg"] Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.705690 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-x65vd"] Jan 27 08:00:54 crc kubenswrapper[4799]: W0127 08:00:54.744197 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda490c085_39fb_4831_89cf_2ccaf0826bdd.slice/crio-4a1fbef14d405e7943d395906d8b88e53db9d04a14af39ffdcdcf2eacf55ebeb WatchSource:0}: Error finding container 4a1fbef14d405e7943d395906d8b88e53db9d04a14af39ffdcdcf2eacf55ebeb: Status 404 returned error can't find the container with id 4a1fbef14d405e7943d395906d8b88e53db9d04a14af39ffdcdcf2eacf55ebeb Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.857601 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-x65vd" event={"ID":"a490c085-39fb-4831-89cf-2ccaf0826bdd","Type":"ContainerStarted","Data":"4a1fbef14d405e7943d395906d8b88e53db9d04a14af39ffdcdcf2eacf55ebeb"} Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.859010 4799 generic.go:334] "Generic (PLEG): container finished" podID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerID="64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0" exitCode=0 Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.859038 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zmfvg" event={"ID":"90bdf854-e5f4-4390-9cea-a049ad195cce","Type":"ContainerDied","Data":"64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0"} Jan 27 08:00:54 crc kubenswrapper[4799]: I0127 08:00:54.859052 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmfvg" event={"ID":"90bdf854-e5f4-4390-9cea-a049ad195cce","Type":"ContainerStarted","Data":"c11a24536d8edb49b684e03ceb1ff7672bf22a6919c5b034197c5f28e70c3594"} Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.030968 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.031356 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.070508 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.877449 4799 generic.go:334] "Generic (PLEG): container finished" podID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerID="6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5" exitCode=0 Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.877736 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmfvg" event={"ID":"90bdf854-e5f4-4390-9cea-a049ad195cce","Type":"ContainerDied","Data":"6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5"} Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.880608 4799 generic.go:334] "Generic (PLEG): container finished" podID="a490c085-39fb-4831-89cf-2ccaf0826bdd" containerID="45e689bb17e3a2ac39c362e0a8f7a1fdd7d647298079cb5d445febefd7b9a5b6" exitCode=0 Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.880942 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-x65vd" event={"ID":"a490c085-39fb-4831-89cf-2ccaf0826bdd","Type":"ContainerDied","Data":"45e689bb17e3a2ac39c362e0a8f7a1fdd7d647298079cb5d445febefd7b9a5b6"} Jan 27 08:00:56 crc kubenswrapper[4799]: I0127 08:00:56.929682 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:57 crc kubenswrapper[4799]: I0127 08:00:57.889841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmfvg" event={"ID":"90bdf854-e5f4-4390-9cea-a049ad195cce","Type":"ContainerStarted","Data":"93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698"} Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.160512 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.176144 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zmfvg" podStartSLOduration=2.7204006080000003 podStartE2EDuration="5.17612248s" podCreationTimestamp="2026-01-27 08:00:53 +0000 UTC" firstStartedPulling="2026-01-27 08:00:54.860507358 +0000 UTC m=+921.171611423" lastFinishedPulling="2026-01-27 08:00:57.31622919 +0000 UTC m=+923.627333295" observedRunningTime="2026-01-27 08:00:57.926007896 +0000 UTC m=+924.237111951" watchObservedRunningTime="2026-01-27 08:00:58.17612248 +0000 UTC m=+924.487226545" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.229041 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rrrb\" (UniqueName: \"kubernetes.io/projected/a490c085-39fb-4831-89cf-2ccaf0826bdd-kube-api-access-9rrrb\") pod \"a490c085-39fb-4831-89cf-2ccaf0826bdd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 
08:00:58.229110 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a490c085-39fb-4831-89cf-2ccaf0826bdd-node-mnt\") pod \"a490c085-39fb-4831-89cf-2ccaf0826bdd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.229265 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a490c085-39fb-4831-89cf-2ccaf0826bdd-crc-storage\") pod \"a490c085-39fb-4831-89cf-2ccaf0826bdd\" (UID: \"a490c085-39fb-4831-89cf-2ccaf0826bdd\") " Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.229271 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a490c085-39fb-4831-89cf-2ccaf0826bdd-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a490c085-39fb-4831-89cf-2ccaf0826bdd" (UID: "a490c085-39fb-4831-89cf-2ccaf0826bdd"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.229595 4799 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a490c085-39fb-4831-89cf-2ccaf0826bdd-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.235721 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a490c085-39fb-4831-89cf-2ccaf0826bdd-kube-api-access-9rrrb" (OuterVolumeSpecName: "kube-api-access-9rrrb") pod "a490c085-39fb-4831-89cf-2ccaf0826bdd" (UID: "a490c085-39fb-4831-89cf-2ccaf0826bdd"). InnerVolumeSpecName "kube-api-access-9rrrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.242409 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a490c085-39fb-4831-89cf-2ccaf0826bdd-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a490c085-39fb-4831-89cf-2ccaf0826bdd" (UID: "a490c085-39fb-4831-89cf-2ccaf0826bdd"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.330639 4799 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a490c085-39fb-4831-89cf-2ccaf0826bdd-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.331042 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rrrb\" (UniqueName: \"kubernetes.io/projected/a490c085-39fb-4831-89cf-2ccaf0826bdd-kube-api-access-9rrrb\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.896219 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-x65vd" Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.896211 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-x65vd" event={"ID":"a490c085-39fb-4831-89cf-2ccaf0826bdd","Type":"ContainerDied","Data":"4a1fbef14d405e7943d395906d8b88e53db9d04a14af39ffdcdcf2eacf55ebeb"} Jan 27 08:00:58 crc kubenswrapper[4799]: I0127 08:00:58.896288 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1fbef14d405e7943d395906d8b88e53db9d04a14af39ffdcdcf2eacf55ebeb" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.293600 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkn2z"] Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.293850 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pkn2z" podUID="1a5de662-d02a-49af-b510-84d4567ad830" containerName="registry-server" containerID="cri-o://6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6" gracePeriod=2 Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.706114 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.749131 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-catalog-content\") pod \"1a5de662-d02a-49af-b510-84d4567ad830\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.749272 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-utilities\") pod \"1a5de662-d02a-49af-b510-84d4567ad830\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.749454 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmrzm\" (UniqueName: \"kubernetes.io/projected/1a5de662-d02a-49af-b510-84d4567ad830-kube-api-access-zmrzm\") pod \"1a5de662-d02a-49af-b510-84d4567ad830\" (UID: \"1a5de662-d02a-49af-b510-84d4567ad830\") " Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.750599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-utilities" (OuterVolumeSpecName: "utilities") pod "1a5de662-d02a-49af-b510-84d4567ad830" (UID: "1a5de662-d02a-49af-b510-84d4567ad830"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.755564 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5de662-d02a-49af-b510-84d4567ad830-kube-api-access-zmrzm" (OuterVolumeSpecName: "kube-api-access-zmrzm") pod "1a5de662-d02a-49af-b510-84d4567ad830" (UID: "1a5de662-d02a-49af-b510-84d4567ad830"). InnerVolumeSpecName "kube-api-access-zmrzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.779621 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a5de662-d02a-49af-b510-84d4567ad830" (UID: "1a5de662-d02a-49af-b510-84d4567ad830"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.851105 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.851179 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5de662-d02a-49af-b510-84d4567ad830-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.851202 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmrzm\" (UniqueName: \"kubernetes.io/projected/1a5de662-d02a-49af-b510-84d4567ad830-kube-api-access-zmrzm\") on node \"crc\" DevicePath \"\"" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.907685 4799 generic.go:334] "Generic (PLEG): container finished" podID="1a5de662-d02a-49af-b510-84d4567ad830" containerID="6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6" exitCode=0 Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.907756 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkn2z" event={"ID":"1a5de662-d02a-49af-b510-84d4567ad830","Type":"ContainerDied","Data":"6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6"} Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.907787 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkn2z" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.907806 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkn2z" event={"ID":"1a5de662-d02a-49af-b510-84d4567ad830","Type":"ContainerDied","Data":"b680fc5230c0b7eb49d308a556386312ccd52319f88469623c58fd2361281cf0"} Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.907824 4799 scope.go:117] "RemoveContainer" containerID="6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.928682 4799 scope.go:117] "RemoveContainer" containerID="57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.968570 4799 scope.go:117] "RemoveContainer" containerID="e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.977816 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkn2z"] Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.983235 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkn2z"] Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.994709 4799 scope.go:117] "RemoveContainer" containerID="6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6" Jan 27 08:00:59 crc kubenswrapper[4799]: E0127 08:00:59.995239 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6\": container with ID starting with 6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6 not found: ID does not exist" containerID="6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.995280 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6"} err="failed to get container status \"6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6\": rpc error: code = NotFound desc = could not find container \"6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6\": container with ID starting with 6355078532d35d8840764ee33ecd0a656c5a375a3132351de7e953233e9444c6 not found: ID does not exist" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.995340 4799 scope.go:117] "RemoveContainer" containerID="57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98" Jan 27 08:00:59 crc kubenswrapper[4799]: E0127 08:00:59.995868 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98\": container with ID starting with 57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98 not found: ID does not exist" containerID="57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.995893 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98"} err="failed to get container status \"57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98\": rpc error: code = NotFound desc = could not find container \"57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98\": container with ID starting with 57c4a2e41099a2c0cac939aa897122519e33a00163fcfc765189eae06aacba98 not found: ID does not exist" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.995913 4799 scope.go:117] "RemoveContainer" containerID="e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4" Jan 27 08:00:59 crc kubenswrapper[4799]: E0127 
08:00:59.996120 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4\": container with ID starting with e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4 not found: ID does not exist" containerID="e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4" Jan 27 08:00:59 crc kubenswrapper[4799]: I0127 08:00:59.996146 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4"} err="failed to get container status \"e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4\": rpc error: code = NotFound desc = could not find container \"e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4\": container with ID starting with e66572d160cfb5cc18e4cd1c333eaf208fd61ad2345fe215bfacdd570f072bf4 not found: ID does not exist" Jan 27 08:01:00 crc kubenswrapper[4799]: I0127 08:01:00.463555 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5de662-d02a-49af-b510-84d4567ad830" path="/var/lib/kubelet/pods/1a5de662-d02a-49af-b510-84d4567ad830/volumes" Jan 27 08:01:02 crc kubenswrapper[4799]: I0127 08:01:02.874924 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fgj2p" Jan 27 08:01:04 crc kubenswrapper[4799]: I0127 08:01:04.238547 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:01:04 crc kubenswrapper[4799]: I0127 08:01:04.238654 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:01:04 crc kubenswrapper[4799]: I0127 08:01:04.304573 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:01:04 crc kubenswrapper[4799]: I0127 08:01:04.995600 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.341453 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmfvg"] Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.638221 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927"] Jan 27 08:01:06 crc kubenswrapper[4799]: E0127 08:01:06.638492 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5de662-d02a-49af-b510-84d4567ad830" containerName="extract-content" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.638508 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5de662-d02a-49af-b510-84d4567ad830" containerName="extract-content" Jan 27 08:01:06 crc kubenswrapper[4799]: E0127 08:01:06.638531 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5de662-d02a-49af-b510-84d4567ad830" containerName="extract-utilities" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.638539 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5de662-d02a-49af-b510-84d4567ad830" containerName="extract-utilities" Jan 27 08:01:06 crc kubenswrapper[4799]: E0127 08:01:06.638548 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a490c085-39fb-4831-89cf-2ccaf0826bdd" containerName="storage" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.638556 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a490c085-39fb-4831-89cf-2ccaf0826bdd" containerName="storage" Jan 27 08:01:06 crc kubenswrapper[4799]: E0127 08:01:06.638566 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5de662-d02a-49af-b510-84d4567ad830" 
containerName="registry-server" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.638573 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5de662-d02a-49af-b510-84d4567ad830" containerName="registry-server" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.638702 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5de662-d02a-49af-b510-84d4567ad830" containerName="registry-server" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.638716 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a490c085-39fb-4831-89cf-2ccaf0826bdd" containerName="storage" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.639773 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.641478 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.648191 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927"] Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.754707 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.754769 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x82l\" (UniqueName: 
\"kubernetes.io/projected/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-kube-api-access-7x82l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.754800 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.856385 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.856445 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x82l\" (UniqueName: \"kubernetes.io/projected/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-kube-api-access-7x82l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.856469 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-bundle\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.857105 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.857162 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.878233 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x82l\" (UniqueName: \"kubernetes.io/projected/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-kube-api-access-7x82l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 08:01:06.954940 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zmfvg" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="registry-server" containerID="cri-o://93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698" gracePeriod=2 Jan 27 08:01:06 crc kubenswrapper[4799]: I0127 
08:01:06.958011 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.214410 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927"] Jan 27 08:01:07 crc kubenswrapper[4799]: W0127 08:01:07.223492 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3c8980e_d12a_4646_bc2a_ab79fa15f95e.slice/crio-eef8999bbf9c7b10dcdefe616536753fead29e2ff70bbad155218ac31425b5c6 WatchSource:0}: Error finding container eef8999bbf9c7b10dcdefe616536753fead29e2ff70bbad155218ac31425b5c6: Status 404 returned error can't find the container with id eef8999bbf9c7b10dcdefe616536753fead29e2ff70bbad155218ac31425b5c6 Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.781247 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.872107 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-catalog-content\") pod \"90bdf854-e5f4-4390-9cea-a049ad195cce\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.872207 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-utilities\") pod \"90bdf854-e5f4-4390-9cea-a049ad195cce\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.872344 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h96p5\" (UniqueName: \"kubernetes.io/projected/90bdf854-e5f4-4390-9cea-a049ad195cce-kube-api-access-h96p5\") pod \"90bdf854-e5f4-4390-9cea-a049ad195cce\" (UID: \"90bdf854-e5f4-4390-9cea-a049ad195cce\") " Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.873896 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-utilities" (OuterVolumeSpecName: "utilities") pod "90bdf854-e5f4-4390-9cea-a049ad195cce" (UID: "90bdf854-e5f4-4390-9cea-a049ad195cce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.879649 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90bdf854-e5f4-4390-9cea-a049ad195cce-kube-api-access-h96p5" (OuterVolumeSpecName: "kube-api-access-h96p5") pod "90bdf854-e5f4-4390-9cea-a049ad195cce" (UID: "90bdf854-e5f4-4390-9cea-a049ad195cce"). InnerVolumeSpecName "kube-api-access-h96p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.937463 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90bdf854-e5f4-4390-9cea-a049ad195cce" (UID: "90bdf854-e5f4-4390-9cea-a049ad195cce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.964141 4799 generic.go:334] "Generic (PLEG): container finished" podID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerID="93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698" exitCode=0 Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.964216 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zmfvg" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.964229 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmfvg" event={"ID":"90bdf854-e5f4-4390-9cea-a049ad195cce","Type":"ContainerDied","Data":"93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698"} Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.964269 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmfvg" event={"ID":"90bdf854-e5f4-4390-9cea-a049ad195cce","Type":"ContainerDied","Data":"c11a24536d8edb49b684e03ceb1ff7672bf22a6919c5b034197c5f28e70c3594"} Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.964337 4799 scope.go:117] "RemoveContainer" containerID="93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.966840 4799 generic.go:334] "Generic (PLEG): container finished" podID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" 
containerID="66a5c16b5733823f545f35f939939f1ff75fde6252401ba85bfda91c9f722a8e" exitCode=0 Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.966867 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" event={"ID":"a3c8980e-d12a-4646-bc2a-ab79fa15f95e","Type":"ContainerDied","Data":"66a5c16b5733823f545f35f939939f1ff75fde6252401ba85bfda91c9f722a8e"} Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.966906 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" event={"ID":"a3c8980e-d12a-4646-bc2a-ab79fa15f95e","Type":"ContainerStarted","Data":"eef8999bbf9c7b10dcdefe616536753fead29e2ff70bbad155218ac31425b5c6"} Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.974003 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.974447 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90bdf854-e5f4-4390-9cea-a049ad195cce-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.974459 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h96p5\" (UniqueName: \"kubernetes.io/projected/90bdf854-e5f4-4390-9cea-a049ad195cce-kube-api-access-h96p5\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:07 crc kubenswrapper[4799]: I0127 08:01:07.990452 4799 scope.go:117] "RemoveContainer" containerID="6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.020450 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmfvg"] Jan 27 08:01:08 crc 
kubenswrapper[4799]: I0127 08:01:08.030910 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zmfvg"] Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.038068 4799 scope.go:117] "RemoveContainer" containerID="64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.050912 4799 scope.go:117] "RemoveContainer" containerID="93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698" Jan 27 08:01:08 crc kubenswrapper[4799]: E0127 08:01:08.051411 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698\": container with ID starting with 93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698 not found: ID does not exist" containerID="93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.051461 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698"} err="failed to get container status \"93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698\": rpc error: code = NotFound desc = could not find container \"93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698\": container with ID starting with 93fa406f6e2a6358e7bdfd792f2326336e61aeffe6fae751a5428b9fb0e05698 not found: ID does not exist" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.051489 4799 scope.go:117] "RemoveContainer" containerID="6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5" Jan 27 08:01:08 crc kubenswrapper[4799]: E0127 08:01:08.051839 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5\": container with ID starting with 6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5 not found: ID does not exist" containerID="6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.051871 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5"} err="failed to get container status \"6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5\": rpc error: code = NotFound desc = could not find container \"6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5\": container with ID starting with 6b872ef34f3e475e1a50ca4bb6a1c98c8b542e956e50db4b6a1da0a9347612b5 not found: ID does not exist" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.051890 4799 scope.go:117] "RemoveContainer" containerID="64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0" Jan 27 08:01:08 crc kubenswrapper[4799]: E0127 08:01:08.052393 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0\": container with ID starting with 64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0 not found: ID does not exist" containerID="64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.052437 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0"} err="failed to get container status \"64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0\": rpc error: code = NotFound desc = could not find container \"64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0\": container with ID 
starting with 64320d3b6207984b9c3c6cde7c5347c3e653055fe81479df1ad5bce61b090ca0 not found: ID does not exist" Jan 27 08:01:08 crc kubenswrapper[4799]: I0127 08:01:08.480113 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" path="/var/lib/kubelet/pods/90bdf854-e5f4-4390-9cea-a049ad195cce/volumes" Jan 27 08:01:09 crc kubenswrapper[4799]: I0127 08:01:09.990199 4799 generic.go:334] "Generic (PLEG): container finished" podID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerID="f26e423a5bc9f5cbf5b81219f2255a5021d8971b897a6d4771e2afb78ab15a57" exitCode=0 Jan 27 08:01:09 crc kubenswrapper[4799]: I0127 08:01:09.990331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" event={"ID":"a3c8980e-d12a-4646-bc2a-ab79fa15f95e","Type":"ContainerDied","Data":"f26e423a5bc9f5cbf5b81219f2255a5021d8971b897a6d4771e2afb78ab15a57"} Jan 27 08:01:11 crc kubenswrapper[4799]: I0127 08:01:11.000562 4799 generic.go:334] "Generic (PLEG): container finished" podID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerID="5ebcc1b9acb32d1a491dc6440332c4beabe921c1bc0c386001ff84e989f9375e" exitCode=0 Jan 27 08:01:11 crc kubenswrapper[4799]: I0127 08:01:11.000631 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" event={"ID":"a3c8980e-d12a-4646-bc2a-ab79fa15f95e","Type":"ContainerDied","Data":"5ebcc1b9acb32d1a491dc6440332c4beabe921c1bc0c386001ff84e989f9375e"} Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.294717 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.339718 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-util\") pod \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.339810 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x82l\" (UniqueName: \"kubernetes.io/projected/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-kube-api-access-7x82l\") pod \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.339920 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-bundle\") pod \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\" (UID: \"a3c8980e-d12a-4646-bc2a-ab79fa15f95e\") " Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.340643 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-bundle" (OuterVolumeSpecName: "bundle") pod "a3c8980e-d12a-4646-bc2a-ab79fa15f95e" (UID: "a3c8980e-d12a-4646-bc2a-ab79fa15f95e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.345009 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-kube-api-access-7x82l" (OuterVolumeSpecName: "kube-api-access-7x82l") pod "a3c8980e-d12a-4646-bc2a-ab79fa15f95e" (UID: "a3c8980e-d12a-4646-bc2a-ab79fa15f95e"). InnerVolumeSpecName "kube-api-access-7x82l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.354741 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-util" (OuterVolumeSpecName: "util") pod "a3c8980e-d12a-4646-bc2a-ab79fa15f95e" (UID: "a3c8980e-d12a-4646-bc2a-ab79fa15f95e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.440988 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.441038 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x82l\" (UniqueName: \"kubernetes.io/projected/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-kube-api-access-7x82l\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:12 crc kubenswrapper[4799]: I0127 08:01:12.441054 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3c8980e-d12a-4646-bc2a-ab79fa15f95e-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:13 crc kubenswrapper[4799]: I0127 08:01:13.016506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" event={"ID":"a3c8980e-d12a-4646-bc2a-ab79fa15f95e","Type":"ContainerDied","Data":"eef8999bbf9c7b10dcdefe616536753fead29e2ff70bbad155218ac31425b5c6"} Jan 27 08:01:13 crc kubenswrapper[4799]: I0127 08:01:13.016558 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef8999bbf9c7b10dcdefe616536753fead29e2ff70bbad155218ac31425b5c6" Jan 27 08:01:13 crc kubenswrapper[4799]: I0127 08:01:13.016593 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.489139 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pcqcj"] Jan 27 08:01:14 crc kubenswrapper[4799]: E0127 08:01:14.489872 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerName="extract" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.489888 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerName="extract" Jan 27 08:01:14 crc kubenswrapper[4799]: E0127 08:01:14.489899 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="extract-utilities" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.489905 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="extract-utilities" Jan 27 08:01:14 crc kubenswrapper[4799]: E0127 08:01:14.489917 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerName="util" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.489923 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerName="util" Jan 27 08:01:14 crc kubenswrapper[4799]: E0127 08:01:14.489930 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="extract-content" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.489937 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="extract-content" Jan 27 08:01:14 crc kubenswrapper[4799]: E0127 08:01:14.489951 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerName="pull" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.489957 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerName="pull" Jan 27 08:01:14 crc kubenswrapper[4799]: E0127 08:01:14.489968 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="registry-server" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.489973 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="registry-server" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.490071 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bdf854-e5f4-4390-9cea-a049ad195cce" containerName="registry-server" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.490083 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c8980e-d12a-4646-bc2a-ab79fa15f95e" containerName="extract" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.490595 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.493960 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.494564 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-nnrgf" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.494636 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.503004 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pcqcj"] Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.574773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpfdf\" (UniqueName: \"kubernetes.io/projected/3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff-kube-api-access-gpfdf\") pod \"nmstate-operator-646758c888-pcqcj\" (UID: \"3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff\") " pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.676376 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpfdf\" (UniqueName: \"kubernetes.io/projected/3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff-kube-api-access-gpfdf\") pod \"nmstate-operator-646758c888-pcqcj\" (UID: \"3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff\") " pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.704765 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpfdf\" (UniqueName: \"kubernetes.io/projected/3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff-kube-api-access-gpfdf\") pod \"nmstate-operator-646758c888-pcqcj\" (UID: 
\"3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff\") " pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" Jan 27 08:01:14 crc kubenswrapper[4799]: I0127 08:01:14.805246 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" Jan 27 08:01:15 crc kubenswrapper[4799]: I0127 08:01:15.261535 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pcqcj"] Jan 27 08:01:15 crc kubenswrapper[4799]: W0127 08:01:15.277571 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b644f64_e142_4c2a_89b2_f1e8a2c9f5ff.slice/crio-fde30df219d36ca14a05342b2423d349c76e01a5c7e12874ba1d3d5119aff8fa WatchSource:0}: Error finding container fde30df219d36ca14a05342b2423d349c76e01a5c7e12874ba1d3d5119aff8fa: Status 404 returned error can't find the container with id fde30df219d36ca14a05342b2423d349c76e01a5c7e12874ba1d3d5119aff8fa Jan 27 08:01:16 crc kubenswrapper[4799]: I0127 08:01:16.033071 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" event={"ID":"3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff","Type":"ContainerStarted","Data":"fde30df219d36ca14a05342b2423d349c76e01a5c7e12874ba1d3d5119aff8fa"} Jan 27 08:01:18 crc kubenswrapper[4799]: I0127 08:01:18.046015 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" event={"ID":"3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff","Type":"ContainerStarted","Data":"ee6650c226dab3f28b5ec43712477edb540b69e133483568b8d377b0c46971d2"} Jan 27 08:01:18 crc kubenswrapper[4799]: I0127 08:01:18.065201 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pcqcj" podStartSLOduration=1.988844091 podStartE2EDuration="4.065179663s" podCreationTimestamp="2026-01-27 08:01:14 +0000 UTC" 
firstStartedPulling="2026-01-27 08:01:15.28041849 +0000 UTC m=+941.591522555" lastFinishedPulling="2026-01-27 08:01:17.356754072 +0000 UTC m=+943.667858127" observedRunningTime="2026-01-27 08:01:18.060619236 +0000 UTC m=+944.371723321" watchObservedRunningTime="2026-01-27 08:01:18.065179663 +0000 UTC m=+944.376283738" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.065617 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-76m8g"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.066668 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.068752 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8rqq7" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.082986 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-76m8g"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.099375 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.100367 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.107290 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.119608 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.133526 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wpbqw"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.134428 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.153774 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cx75\" (UniqueName: \"kubernetes.io/projected/05d2b510-84eb-45e6-851f-f3c8ead6c49f-kube-api-access-9cx75\") pod \"nmstate-metrics-54757c584b-76m8g\" (UID: \"05d2b510-84eb-45e6-851f-f3c8ead6c49f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.239252 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.240028 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.250874 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tm6r2" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.251059 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.252016 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.254546 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmj5s\" (UniqueName: \"kubernetes.io/projected/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-kube-api-access-jmj5s\") pod \"nmstate-webhook-8474b5b9d8-7tglw\" (UID: \"b3bbbfb1-9446-4db4-a64c-4124bdb3609f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.254624 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv42b\" (UniqueName: \"kubernetes.io/projected/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-kube-api-access-zv42b\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.254663 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-ovs-socket\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.254713 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7tglw\" (UID: \"b3bbbfb1-9446-4db4-a64c-4124bdb3609f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.254769 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cx75\" (UniqueName: \"kubernetes.io/projected/05d2b510-84eb-45e6-851f-f3c8ead6c49f-kube-api-access-9cx75\") pod \"nmstate-metrics-54757c584b-76m8g\" (UID: \"05d2b510-84eb-45e6-851f-f3c8ead6c49f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.254811 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-nmstate-lock\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.254839 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-dbus-socket\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.259309 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.281795 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cx75\" (UniqueName: \"kubernetes.io/projected/05d2b510-84eb-45e6-851f-f3c8ead6c49f-kube-api-access-9cx75\") pod \"nmstate-metrics-54757c584b-76m8g\" (UID: 
\"05d2b510-84eb-45e6-851f-f3c8ead6c49f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356256 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-ovs-socket\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356332 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7tglw\" (UID: \"b3bbbfb1-9446-4db4-a64c-4124bdb3609f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356372 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7291c9f-4df4-41fd-b55c-ec9e771c4088-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356423 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmhzl\" (UniqueName: \"kubernetes.io/projected/e7291c9f-4df4-41fd-b55c-ec9e771c4088-kube-api-access-dmhzl\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356420 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-ovs-socket\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356449 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e7291c9f-4df4-41fd-b55c-ec9e771c4088-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356475 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-nmstate-lock\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356662 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-nmstate-lock\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356678 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-dbus-socket\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.356916 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmj5s\" (UniqueName: 
\"kubernetes.io/projected/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-kube-api-access-jmj5s\") pod \"nmstate-webhook-8474b5b9d8-7tglw\" (UID: \"b3bbbfb1-9446-4db4-a64c-4124bdb3609f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.357026 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv42b\" (UniqueName: \"kubernetes.io/projected/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-kube-api-access-zv42b\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: E0127 08:01:19.357048 4799 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 08:01:19 crc kubenswrapper[4799]: E0127 08:01:19.357179 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-tls-key-pair podName:b3bbbfb1-9446-4db4-a64c-4124bdb3609f nodeName:}" failed. No retries permitted until 2026-01-27 08:01:19.857148264 +0000 UTC m=+946.168252329 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-7tglw" (UID: "b3bbbfb1-9446-4db4-a64c-4124bdb3609f") : secret "openshift-nmstate-webhook" not found Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.357263 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-dbus-socket\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.377987 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmj5s\" (UniqueName: \"kubernetes.io/projected/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-kube-api-access-jmj5s\") pod \"nmstate-webhook-8474b5b9d8-7tglw\" (UID: \"b3bbbfb1-9446-4db4-a64c-4124bdb3609f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.378880 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv42b\" (UniqueName: \"kubernetes.io/projected/1a6a281a-0d48-4e63-abd5-1cf7fc08baf7-kube-api-access-zv42b\") pod \"nmstate-handler-wpbqw\" (UID: \"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7\") " pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.384850 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.441723 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79476f9bb4-7f225"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.442393 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.451050 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.458787 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7291c9f-4df4-41fd-b55c-ec9e771c4088-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.458838 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmhzl\" (UniqueName: \"kubernetes.io/projected/e7291c9f-4df4-41fd-b55c-ec9e771c4088-kube-api-access-dmhzl\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.458862 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e7291c9f-4df4-41fd-b55c-ec9e771c4088-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.460071 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e7291c9f-4df4-41fd-b55c-ec9e771c4088-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.461563 4799 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79476f9bb4-7f225"] Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.466012 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7291c9f-4df4-41fd-b55c-ec9e771c4088-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.478350 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmhzl\" (UniqueName: \"kubernetes.io/projected/e7291c9f-4df4-41fd-b55c-ec9e771c4088-kube-api-access-dmhzl\") pod \"nmstate-console-plugin-7754f76f8b-dbtdq\" (UID: \"e7291c9f-4df4-41fd-b55c-ec9e771c4088\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: W0127 08:01:19.483221 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a6a281a_0d48_4e63_abd5_1cf7fc08baf7.slice/crio-bc78aa2e89739365373243baacaba8f0d606bbc60cc2fc14c7b9eb0f4b9ce7a2 WatchSource:0}: Error finding container bc78aa2e89739365373243baacaba8f0d606bbc60cc2fc14c7b9eb0f4b9ce7a2: Status 404 returned error can't find the container with id bc78aa2e89739365373243baacaba8f0d606bbc60cc2fc14c7b9eb0f4b9ce7a2 Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.553568 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.559859 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-service-ca\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.559906 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjg7\" (UniqueName: \"kubernetes.io/projected/5b772069-fbdb-4197-a723-08943b57e902-kube-api-access-nxjg7\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.559958 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b772069-fbdb-4197-a723-08943b57e902-console-oauth-config\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.559996 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-trusted-ca-bundle\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.560020 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5b772069-fbdb-4197-a723-08943b57e902-console-serving-cert\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.560043 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-console-config\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.560071 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-oauth-serving-cert\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.631489 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-76m8g"] Jan 27 08:01:19 crc kubenswrapper[4799]: W0127 08:01:19.647768 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05d2b510_84eb_45e6_851f_f3c8ead6c49f.slice/crio-d1fd14e068c191660e4324edcda8f5732a150bcfa89cf4db158d03929f141f5c WatchSource:0}: Error finding container d1fd14e068c191660e4324edcda8f5732a150bcfa89cf4db158d03929f141f5c: Status 404 returned error can't find the container with id d1fd14e068c191660e4324edcda8f5732a150bcfa89cf4db158d03929f141f5c Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.662116 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-service-ca\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.662171 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxjg7\" (UniqueName: \"kubernetes.io/projected/5b772069-fbdb-4197-a723-08943b57e902-kube-api-access-nxjg7\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.662217 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b772069-fbdb-4197-a723-08943b57e902-console-oauth-config\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.662284 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-trusted-ca-bundle\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.662383 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b772069-fbdb-4197-a723-08943b57e902-console-serving-cert\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.662412 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-console-config\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.662441 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-oauth-serving-cert\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.663173 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-service-ca\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.663330 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-console-config\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.663422 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-oauth-serving-cert\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.663552 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5b772069-fbdb-4197-a723-08943b57e902-trusted-ca-bundle\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.667123 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b772069-fbdb-4197-a723-08943b57e902-console-oauth-config\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.667165 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b772069-fbdb-4197-a723-08943b57e902-console-serving-cert\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.681929 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxjg7\" (UniqueName: \"kubernetes.io/projected/5b772069-fbdb-4197-a723-08943b57e902-kube-api-access-nxjg7\") pod \"console-79476f9bb4-7f225\" (UID: \"5b772069-fbdb-4197-a723-08943b57e902\") " pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.758085 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq"] Jan 27 08:01:19 crc kubenswrapper[4799]: W0127 08:01:19.762682 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7291c9f_4df4_41fd_b55c_ec9e771c4088.slice/crio-b752d6a15e01eb44f7deed56a6dcd84d1899a1aa82e07af5c6e6dcc3f8f17c1c WatchSource:0}: Error finding container b752d6a15e01eb44f7deed56a6dcd84d1899a1aa82e07af5c6e6dcc3f8f17c1c: Status 404 
returned error can't find the container with id b752d6a15e01eb44f7deed56a6dcd84d1899a1aa82e07af5c6e6dcc3f8f17c1c Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.767014 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.865891 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7tglw\" (UID: \"b3bbbfb1-9446-4db4-a64c-4124bdb3609f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:19 crc kubenswrapper[4799]: I0127 08:01:19.876614 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3bbbfb1-9446-4db4-a64c-4124bdb3609f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7tglw\" (UID: \"b3bbbfb1-9446-4db4-a64c-4124bdb3609f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:20 crc kubenswrapper[4799]: I0127 08:01:20.017204 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:20 crc kubenswrapper[4799]: I0127 08:01:20.060575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" event={"ID":"e7291c9f-4df4-41fd-b55c-ec9e771c4088","Type":"ContainerStarted","Data":"b752d6a15e01eb44f7deed56a6dcd84d1899a1aa82e07af5c6e6dcc3f8f17c1c"} Jan 27 08:01:20 crc kubenswrapper[4799]: I0127 08:01:20.062228 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wpbqw" event={"ID":"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7","Type":"ContainerStarted","Data":"bc78aa2e89739365373243baacaba8f0d606bbc60cc2fc14c7b9eb0f4b9ce7a2"} Jan 27 08:01:20 crc kubenswrapper[4799]: I0127 08:01:20.063121 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" event={"ID":"05d2b510-84eb-45e6-851f-f3c8ead6c49f","Type":"ContainerStarted","Data":"d1fd14e068c191660e4324edcda8f5732a150bcfa89cf4db158d03929f141f5c"} Jan 27 08:01:20 crc kubenswrapper[4799]: I0127 08:01:20.084173 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79476f9bb4-7f225"] Jan 27 08:01:20 crc kubenswrapper[4799]: W0127 08:01:20.092039 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b772069_fbdb_4197_a723_08943b57e902.slice/crio-0a6381f7e5d7a2c96fbb0c7fec5b50f5b960660d315a2c57d1f5e63f1888bee8 WatchSource:0}: Error finding container 0a6381f7e5d7a2c96fbb0c7fec5b50f5b960660d315a2c57d1f5e63f1888bee8: Status 404 returned error can't find the container with id 0a6381f7e5d7a2c96fbb0c7fec5b50f5b960660d315a2c57d1f5e63f1888bee8 Jan 27 08:01:20 crc kubenswrapper[4799]: I0127 08:01:20.221582 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw"] Jan 27 08:01:20 crc kubenswrapper[4799]: W0127 
08:01:20.226835 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3bbbfb1_9446_4db4_a64c_4124bdb3609f.slice/crio-c2330e8057ed03cbedb4f4b1858ce0c78d94df56035e36f830b3429c94bc2194 WatchSource:0}: Error finding container c2330e8057ed03cbedb4f4b1858ce0c78d94df56035e36f830b3429c94bc2194: Status 404 returned error can't find the container with id c2330e8057ed03cbedb4f4b1858ce0c78d94df56035e36f830b3429c94bc2194 Jan 27 08:01:21 crc kubenswrapper[4799]: I0127 08:01:21.084594 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" event={"ID":"b3bbbfb1-9446-4db4-a64c-4124bdb3609f","Type":"ContainerStarted","Data":"c2330e8057ed03cbedb4f4b1858ce0c78d94df56035e36f830b3429c94bc2194"} Jan 27 08:01:21 crc kubenswrapper[4799]: I0127 08:01:21.088052 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79476f9bb4-7f225" event={"ID":"5b772069-fbdb-4197-a723-08943b57e902","Type":"ContainerStarted","Data":"6eeb47d2b0343cfcba3a71132717c5e00092fd44ba198e2f587597d34353d659"} Jan 27 08:01:21 crc kubenswrapper[4799]: I0127 08:01:21.088109 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79476f9bb4-7f225" event={"ID":"5b772069-fbdb-4197-a723-08943b57e902","Type":"ContainerStarted","Data":"0a6381f7e5d7a2c96fbb0c7fec5b50f5b960660d315a2c57d1f5e63f1888bee8"} Jan 27 08:01:21 crc kubenswrapper[4799]: I0127 08:01:21.114051 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79476f9bb4-7f225" podStartSLOduration=2.114031616 podStartE2EDuration="2.114031616s" podCreationTimestamp="2026-01-27 08:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:01:21.10765132 +0000 UTC m=+947.418755405" watchObservedRunningTime="2026-01-27 08:01:21.114031616 
+0000 UTC m=+947.425135671" Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.103737 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" event={"ID":"b3bbbfb1-9446-4db4-a64c-4124bdb3609f","Type":"ContainerStarted","Data":"ab3e6943a876c5d3de51ae4e96e21e386cf29f477500e79c17f64105924e2cd3"} Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.104378 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.106677 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" event={"ID":"e7291c9f-4df4-41fd-b55c-ec9e771c4088","Type":"ContainerStarted","Data":"a8c3bf04328cf793ddc42c5146386670acaf6c7578a4214f872a89dbf0f76b03"} Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.108550 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wpbqw" event={"ID":"1a6a281a-0d48-4e63-abd5-1cf7fc08baf7","Type":"ContainerStarted","Data":"c4673c3bc89420be4369d6fcf5ed9f8b50e36a35e2fb208ed7703ace4ce9ee37"} Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.108739 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.110431 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" event={"ID":"05d2b510-84eb-45e6-851f-f3c8ead6c49f","Type":"ContainerStarted","Data":"f512e625cecf0b253e856361e4a51a528224f1055f658017ec97b40d190de947"} Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.131605 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" podStartSLOduration=1.960056121 podStartE2EDuration="4.131581884s" podCreationTimestamp="2026-01-27 
08:01:19 +0000 UTC" firstStartedPulling="2026-01-27 08:01:20.230178266 +0000 UTC m=+946.541282331" lastFinishedPulling="2026-01-27 08:01:22.401704029 +0000 UTC m=+948.712808094" observedRunningTime="2026-01-27 08:01:23.124804566 +0000 UTC m=+949.435908641" watchObservedRunningTime="2026-01-27 08:01:23.131581884 +0000 UTC m=+949.442685949" Jan 27 08:01:23 crc kubenswrapper[4799]: I0127 08:01:23.144770 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wpbqw" podStartSLOduration=1.234780172 podStartE2EDuration="4.144751307s" podCreationTimestamp="2026-01-27 08:01:19 +0000 UTC" firstStartedPulling="2026-01-27 08:01:19.488519885 +0000 UTC m=+945.799623950" lastFinishedPulling="2026-01-27 08:01:22.39849102 +0000 UTC m=+948.709595085" observedRunningTime="2026-01-27 08:01:23.141062306 +0000 UTC m=+949.452166381" watchObservedRunningTime="2026-01-27 08:01:23.144751307 +0000 UTC m=+949.455855372" Jan 27 08:01:24 crc kubenswrapper[4799]: I0127 08:01:24.475233 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbtdq" podStartSLOduration=2.843418498 podStartE2EDuration="5.475211603s" podCreationTimestamp="2026-01-27 08:01:19 +0000 UTC" firstStartedPulling="2026-01-27 08:01:19.766733676 +0000 UTC m=+946.077837741" lastFinishedPulling="2026-01-27 08:01:22.398526771 +0000 UTC m=+948.709630846" observedRunningTime="2026-01-27 08:01:23.161771789 +0000 UTC m=+949.472875854" watchObservedRunningTime="2026-01-27 08:01:24.475211603 +0000 UTC m=+950.786315668" Jan 27 08:01:26 crc kubenswrapper[4799]: I0127 08:01:26.137158 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" event={"ID":"05d2b510-84eb-45e6-851f-f3c8ead6c49f","Type":"ContainerStarted","Data":"38888758fb32f9fbf0cffe456bd70111e5c72ffb3c38cb413e4de68eb2b84591"} Jan 27 08:01:26 crc kubenswrapper[4799]: I0127 08:01:26.169418 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-76m8g" podStartSLOduration=1.816089931 podStartE2EDuration="7.169385542s" podCreationTimestamp="2026-01-27 08:01:19 +0000 UTC" firstStartedPulling="2026-01-27 08:01:19.650241216 +0000 UTC m=+945.961345281" lastFinishedPulling="2026-01-27 08:01:25.003536827 +0000 UTC m=+951.314640892" observedRunningTime="2026-01-27 08:01:26.159586661 +0000 UTC m=+952.470690736" watchObservedRunningTime="2026-01-27 08:01:26.169385542 +0000 UTC m=+952.480489607" Jan 27 08:01:29 crc kubenswrapper[4799]: I0127 08:01:29.487264 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wpbqw" Jan 27 08:01:29 crc kubenswrapper[4799]: I0127 08:01:29.767702 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:29 crc kubenswrapper[4799]: I0127 08:01:29.768077 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:29 crc kubenswrapper[4799]: I0127 08:01:29.773812 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:30 crc kubenswrapper[4799]: I0127 08:01:30.169115 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79476f9bb4-7f225" Jan 27 08:01:30 crc kubenswrapper[4799]: I0127 08:01:30.229238 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bl4wn"] Jan 27 08:01:40 crc kubenswrapper[4799]: I0127 08:01:40.025434 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7tglw" Jan 27 08:01:54 crc kubenswrapper[4799]: I0127 08:01:54.912314 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286"] Jan 27 08:01:54 crc kubenswrapper[4799]: I0127 08:01:54.914030 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:54 crc kubenswrapper[4799]: I0127 08:01:54.917187 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 08:01:54 crc kubenswrapper[4799]: I0127 08:01:54.931476 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286"] Jan 27 08:01:54 crc kubenswrapper[4799]: I0127 08:01:54.982982 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:54 crc kubenswrapper[4799]: I0127 08:01:54.983029 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:54 crc kubenswrapper[4799]: I0127 08:01:54.983080 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s59dg\" (UniqueName: \"kubernetes.io/projected/401be67a-e5de-4ad0-bf00-9294434cc929-kube-api-access-s59dg\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.084733 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s59dg\" (UniqueName: \"kubernetes.io/projected/401be67a-e5de-4ad0-bf00-9294434cc929-kube-api-access-s59dg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.084823 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.084843 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.085398 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.085738 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.106928 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s59dg\" (UniqueName: \"kubernetes.io/projected/401be67a-e5de-4ad0-bf00-9294434cc929-kube-api-access-s59dg\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.235890 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.271201 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bl4wn" podUID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" containerName="console" containerID="cri-o://105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5" gracePeriod=15 Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.664737 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bl4wn_1c1b6ac6-0dc3-4f65-bb94-d448893ae317/console/0.log" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.665166 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.744855 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286"] Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.797206 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert\") pod \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.797273 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config\") pod \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.797348 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config\") pod \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.797421 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdss4\" (UniqueName: \"kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4\") pod \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.797495 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle\") pod 
\"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.797556 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca\") pod \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.797633 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert\") pod \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\" (UID: \"1c1b6ac6-0dc3-4f65-bb94-d448893ae317\") " Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.798510 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1c1b6ac6-0dc3-4f65-bb94-d448893ae317" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.798554 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config" (OuterVolumeSpecName: "console-config") pod "1c1b6ac6-0dc3-4f65-bb94-d448893ae317" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.798550 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca" (OuterVolumeSpecName: "service-ca") pod "1c1b6ac6-0dc3-4f65-bb94-d448893ae317" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.798635 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1c1b6ac6-0dc3-4f65-bb94-d448893ae317" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.804036 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4" (OuterVolumeSpecName: "kube-api-access-pdss4") pod "1c1b6ac6-0dc3-4f65-bb94-d448893ae317" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317"). InnerVolumeSpecName "kube-api-access-pdss4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.804058 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1c1b6ac6-0dc3-4f65-bb94-d448893ae317" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.804345 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1c1b6ac6-0dc3-4f65-bb94-d448893ae317" (UID: "1c1b6ac6-0dc3-4f65-bb94-d448893ae317"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.899960 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.900693 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.900770 4799 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.900800 4799 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.900824 4799 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.900846 4799 reconciler_common.go:293] "Volume detached for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:55 crc kubenswrapper[4799]: I0127 08:01:55.900866 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdss4\" (UniqueName: \"kubernetes.io/projected/1c1b6ac6-0dc3-4f65-bb94-d448893ae317-kube-api-access-pdss4\") on node \"crc\" DevicePath \"\"" Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.360281 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bl4wn_1c1b6ac6-0dc3-4f65-bb94-d448893ae317/console/0.log" Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.360693 4799 generic.go:334] "Generic (PLEG): container finished" podID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" containerID="105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5" exitCode=2 Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.360812 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bl4wn" event={"ID":"1c1b6ac6-0dc3-4f65-bb94-d448893ae317","Type":"ContainerDied","Data":"105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5"} Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.360867 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bl4wn" Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.360895 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bl4wn" event={"ID":"1c1b6ac6-0dc3-4f65-bb94-d448893ae317","Type":"ContainerDied","Data":"0b4ba95d01efbdaf11f1a2f9f3c6434531db41dcd54291b58e351dd5dbdc1bbc"} Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.360932 4799 scope.go:117] "RemoveContainer" containerID="105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5" Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.363811 4799 generic.go:334] "Generic (PLEG): container finished" podID="401be67a-e5de-4ad0-bf00-9294434cc929" containerID="3034f2ba0b6be9170f7a1876aa8dd1f76371ee1a5c5a74ce8ab38ae0bf2360d9" exitCode=0 Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.363882 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" event={"ID":"401be67a-e5de-4ad0-bf00-9294434cc929","Type":"ContainerDied","Data":"3034f2ba0b6be9170f7a1876aa8dd1f76371ee1a5c5a74ce8ab38ae0bf2360d9"} Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.363957 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" event={"ID":"401be67a-e5de-4ad0-bf00-9294434cc929","Type":"ContainerStarted","Data":"4d85833a7e66a9663cdc65cbbdd19a7250b63495f5f90d699e48e02b1e670343"} Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.395283 4799 scope.go:117] "RemoveContainer" containerID="105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5" Jan 27 08:01:56 crc kubenswrapper[4799]: E0127 08:01:56.395866 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5\": container with ID 
starting with 105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5 not found: ID does not exist" containerID="105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5" Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.395910 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5"} err="failed to get container status \"105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5\": rpc error: code = NotFound desc = could not find container \"105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5\": container with ID starting with 105bac36d8218230a41cbf9a55411a5966a501cee3209f150c60d22a2003d5b5 not found: ID does not exist" Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.433404 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bl4wn"] Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.438407 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bl4wn"] Jan 27 08:01:56 crc kubenswrapper[4799]: I0127 08:01:56.465953 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" path="/var/lib/kubelet/pods/1c1b6ac6-0dc3-4f65-bb94-d448893ae317/volumes" Jan 27 08:01:58 crc kubenswrapper[4799]: I0127 08:01:58.390273 4799 generic.go:334] "Generic (PLEG): container finished" podID="401be67a-e5de-4ad0-bf00-9294434cc929" containerID="828898e61476a39db72f7009b4e82080b6ec45ce1e3188b456be7a8b427dd40c" exitCode=0 Jan 27 08:01:58 crc kubenswrapper[4799]: I0127 08:01:58.390386 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" event={"ID":"401be67a-e5de-4ad0-bf00-9294434cc929","Type":"ContainerDied","Data":"828898e61476a39db72f7009b4e82080b6ec45ce1e3188b456be7a8b427dd40c"} Jan 27 08:01:59 crc 
kubenswrapper[4799]: I0127 08:01:59.403068 4799 generic.go:334] "Generic (PLEG): container finished" podID="401be67a-e5de-4ad0-bf00-9294434cc929" containerID="cc725daf64863611b0c76c412cf0eab574c3e80006d3b6e341e2faee77140dff" exitCode=0 Jan 27 08:01:59 crc kubenswrapper[4799]: I0127 08:01:59.403133 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" event={"ID":"401be67a-e5de-4ad0-bf00-9294434cc929","Type":"ContainerDied","Data":"cc725daf64863611b0c76c412cf0eab574c3e80006d3b6e341e2faee77140dff"} Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.692622 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.777582 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-util\") pod \"401be67a-e5de-4ad0-bf00-9294434cc929\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.777754 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s59dg\" (UniqueName: \"kubernetes.io/projected/401be67a-e5de-4ad0-bf00-9294434cc929-kube-api-access-s59dg\") pod \"401be67a-e5de-4ad0-bf00-9294434cc929\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.777818 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-bundle\") pod \"401be67a-e5de-4ad0-bf00-9294434cc929\" (UID: \"401be67a-e5de-4ad0-bf00-9294434cc929\") " Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.779446 4799 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-bundle" (OuterVolumeSpecName: "bundle") pod "401be67a-e5de-4ad0-bf00-9294434cc929" (UID: "401be67a-e5de-4ad0-bf00-9294434cc929"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.784394 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401be67a-e5de-4ad0-bf00-9294434cc929-kube-api-access-s59dg" (OuterVolumeSpecName: "kube-api-access-s59dg") pod "401be67a-e5de-4ad0-bf00-9294434cc929" (UID: "401be67a-e5de-4ad0-bf00-9294434cc929"). InnerVolumeSpecName "kube-api-access-s59dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.790527 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-util" (OuterVolumeSpecName: "util") pod "401be67a-e5de-4ad0-bf00-9294434cc929" (UID: "401be67a-e5de-4ad0-bf00-9294434cc929"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.880107 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.880236 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s59dg\" (UniqueName: \"kubernetes.io/projected/401be67a-e5de-4ad0-bf00-9294434cc929-kube-api-access-s59dg\") on node \"crc\" DevicePath \"\"" Jan 27 08:02:00 crc kubenswrapper[4799]: I0127 08:02:00.880436 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/401be67a-e5de-4ad0-bf00-9294434cc929-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:02:01 crc kubenswrapper[4799]: I0127 08:02:01.426379 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" event={"ID":"401be67a-e5de-4ad0-bf00-9294434cc929","Type":"ContainerDied","Data":"4d85833a7e66a9663cdc65cbbdd19a7250b63495f5f90d699e48e02b1e670343"} Jan 27 08:02:01 crc kubenswrapper[4799]: I0127 08:02:01.426454 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d85833a7e66a9663cdc65cbbdd19a7250b63495f5f90d699e48e02b1e670343" Jan 27 08:02:01 crc kubenswrapper[4799]: I0127 08:02:01.426491 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:09.999732 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn"] Jan 27 08:02:10 crc kubenswrapper[4799]: E0127 08:02:10.000574 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401be67a-e5de-4ad0-bf00-9294434cc929" containerName="pull" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.000587 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="401be67a-e5de-4ad0-bf00-9294434cc929" containerName="pull" Jan 27 08:02:10 crc kubenswrapper[4799]: E0127 08:02:10.000598 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401be67a-e5de-4ad0-bf00-9294434cc929" containerName="extract" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.000604 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="401be67a-e5de-4ad0-bf00-9294434cc929" containerName="extract" Jan 27 08:02:10 crc kubenswrapper[4799]: E0127 08:02:10.000618 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" containerName="console" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.000624 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" containerName="console" Jan 27 08:02:10 crc kubenswrapper[4799]: E0127 08:02:10.000633 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401be67a-e5de-4ad0-bf00-9294434cc929" containerName="util" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.000639 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="401be67a-e5de-4ad0-bf00-9294434cc929" containerName="util" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.000730 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c1b6ac6-0dc3-4f65-bb94-d448893ae317" 
containerName="console" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.000739 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="401be67a-e5de-4ad0-bf00-9294434cc929" containerName="extract" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.001110 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.008650 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.008970 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.009157 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.009379 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tbbh5" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.027832 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.048079 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn"] Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.130075 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-webhook-cert\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 
08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.130124 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-apiservice-cert\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.130201 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bt4v\" (UniqueName: \"kubernetes.io/projected/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-kube-api-access-5bt4v\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.231790 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-webhook-cert\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.231835 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-apiservice-cert\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.231896 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bt4v\" (UniqueName: 
\"kubernetes.io/projected/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-kube-api-access-5bt4v\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.237835 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-webhook-cert\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.260035 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-apiservice-cert\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.272981 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bt4v\" (UniqueName: \"kubernetes.io/projected/06ea268e-b6bc-4056-8cb0-5113c8d2a54f-kube-api-access-5bt4v\") pod \"metallb-operator-controller-manager-6d5c6f5f66-nmqdn\" (UID: \"06ea268e-b6bc-4056-8cb0-5113c8d2a54f\") " pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.273049 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv"] Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.273814 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.277229 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv"] Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.277583 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.277808 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.278017 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ds2vr" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.326392 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.332973 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc463ea2-ef36-43be-82ac-dab18b86c215-apiservice-cert\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.333039 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc463ea2-ef36-43be-82ac-dab18b86c215-webhook-cert\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.333150 
4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tprh\" (UniqueName: \"kubernetes.io/projected/dc463ea2-ef36-43be-82ac-dab18b86c215-kube-api-access-2tprh\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.434875 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc463ea2-ef36-43be-82ac-dab18b86c215-webhook-cert\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.435319 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tprh\" (UniqueName: \"kubernetes.io/projected/dc463ea2-ef36-43be-82ac-dab18b86c215-kube-api-access-2tprh\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.435380 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc463ea2-ef36-43be-82ac-dab18b86c215-apiservice-cert\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.441869 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc463ea2-ef36-43be-82ac-dab18b86c215-webhook-cert\") pod 
\"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.443914 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc463ea2-ef36-43be-82ac-dab18b86c215-apiservice-cert\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.451750 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tprh\" (UniqueName: \"kubernetes.io/projected/dc463ea2-ef36-43be-82ac-dab18b86c215-kube-api-access-2tprh\") pod \"metallb-operator-webhook-server-55fb679855-n7lbv\" (UID: \"dc463ea2-ef36-43be-82ac-dab18b86c215\") " pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.539668 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn"] Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.604529 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:10 crc kubenswrapper[4799]: I0127 08:02:10.835357 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv"] Jan 27 08:02:10 crc kubenswrapper[4799]: W0127 08:02:10.837393 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc463ea2_ef36_43be_82ac_dab18b86c215.slice/crio-24043c0211549f9475ede03e9c84e47271eb9ce2e6932f7e6826be48d2e7e01e WatchSource:0}: Error finding container 24043c0211549f9475ede03e9c84e47271eb9ce2e6932f7e6826be48d2e7e01e: Status 404 returned error can't find the container with id 24043c0211549f9475ede03e9c84e47271eb9ce2e6932f7e6826be48d2e7e01e Jan 27 08:02:11 crc kubenswrapper[4799]: I0127 08:02:11.509185 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" event={"ID":"06ea268e-b6bc-4056-8cb0-5113c8d2a54f","Type":"ContainerStarted","Data":"f154485c571e9e868a6c57ad0927b33c4ce922cc040755135b6744804a512573"} Jan 27 08:02:11 crc kubenswrapper[4799]: I0127 08:02:11.510958 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" event={"ID":"dc463ea2-ef36-43be-82ac-dab18b86c215","Type":"ContainerStarted","Data":"24043c0211549f9475ede03e9c84e47271eb9ce2e6932f7e6826be48d2e7e01e"} Jan 27 08:02:13 crc kubenswrapper[4799]: I0127 08:02:13.525717 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" event={"ID":"06ea268e-b6bc-4056-8cb0-5113c8d2a54f","Type":"ContainerStarted","Data":"a3d23f27ef0b78443d28f2a3c9c4c2f41db0dcda6c72bae3b2b678b74d314717"} Jan 27 08:02:13 crc kubenswrapper[4799]: I0127 08:02:13.526096 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:14 crc kubenswrapper[4799]: I0127 08:02:14.489934 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" podStartSLOduration=2.768406778 podStartE2EDuration="5.489912866s" podCreationTimestamp="2026-01-27 08:02:09 +0000 UTC" firstStartedPulling="2026-01-27 08:02:10.544702281 +0000 UTC m=+996.855806346" lastFinishedPulling="2026-01-27 08:02:13.266208359 +0000 UTC m=+999.577312434" observedRunningTime="2026-01-27 08:02:13.552371832 +0000 UTC m=+999.863475927" watchObservedRunningTime="2026-01-27 08:02:14.489912866 +0000 UTC m=+1000.801016931" Jan 27 08:02:15 crc kubenswrapper[4799]: I0127 08:02:15.541220 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" event={"ID":"dc463ea2-ef36-43be-82ac-dab18b86c215","Type":"ContainerStarted","Data":"dc3452f14e464af7fdbd598bcca98c1ad6b359f29a36f26a0ec58a5c02facde1"} Jan 27 08:02:15 crc kubenswrapper[4799]: I0127 08:02:15.541541 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:15 crc kubenswrapper[4799]: I0127 08:02:15.569902 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" podStartSLOduration=1.548513896 podStartE2EDuration="5.569879454s" podCreationTimestamp="2026-01-27 08:02:10 +0000 UTC" firstStartedPulling="2026-01-27 08:02:10.841640151 +0000 UTC m=+997.152744216" lastFinishedPulling="2026-01-27 08:02:14.863005709 +0000 UTC m=+1001.174109774" observedRunningTime="2026-01-27 08:02:15.569408581 +0000 UTC m=+1001.880512666" watchObservedRunningTime="2026-01-27 08:02:15.569879454 +0000 UTC m=+1001.880983519" Jan 27 08:02:23 crc kubenswrapper[4799]: I0127 08:02:23.731827 4799 
patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:02:23 crc kubenswrapper[4799]: I0127 08:02:23.732531 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:02:30 crc kubenswrapper[4799]: I0127 08:02:30.609135 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-55fb679855-n7lbv" Jan 27 08:02:50 crc kubenswrapper[4799]: I0127 08:02:50.331067 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6d5c6f5f66-nmqdn" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.091903 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-tjzjd"] Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.094791 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.106844 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.106970 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.112084 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt"] Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.112906 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.121548 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fkplp" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.121650 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.127714 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt"] Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.174042 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2klzt"] Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.175227 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.176836 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.176881 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.177125 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.177522 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hbjm6" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.192152 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-mdvm9"] Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.193155 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.195661 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.215819 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mdvm9"] Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289234 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wsgq\" (UniqueName: \"kubernetes.io/projected/f1e9dc09-f278-46d6-8a6f-0c617a7446f9-kube-api-access-2wsgq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p95bt\" (UID: \"f1e9dc09-f278-46d6-8a6f-0c617a7446f9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289288 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da662e44-679d-4336-975b-374c7f799f27-metrics-certs\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289334 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/da662e44-679d-4336-975b-374c7f799f27-frr-startup\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289354 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-reloader\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 
08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289412 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metallb-excludel2\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289455 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-frr-sockets\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289471 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1e9dc09-f278-46d6-8a6f-0c617a7446f9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p95bt\" (UID: \"f1e9dc09-f278-46d6-8a6f-0c617a7446f9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbxq7\" (UniqueName: \"kubernetes.io/projected/da662e44-679d-4336-975b-374c7f799f27-kube-api-access-tbxq7\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289512 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metrics-certs\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 
08:02:51.289526 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289548 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-frr-conf\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75h86\" (UniqueName: \"kubernetes.io/projected/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-kube-api-access-75h86\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.289583 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-metrics\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390258 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wsgq\" (UniqueName: \"kubernetes.io/projected/f1e9dc09-f278-46d6-8a6f-0c617a7446f9-kube-api-access-2wsgq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p95bt\" (UID: \"f1e9dc09-f278-46d6-8a6f-0c617a7446f9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390323 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da662e44-679d-4336-975b-374c7f799f27-metrics-certs\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390348 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/da662e44-679d-4336-975b-374c7f799f27-frr-startup\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390369 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/bf75d430-27db-44eb-b2f3-7921d18f0dc1-kube-api-access-wxdxt\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390391 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-reloader\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390425 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metallb-excludel2\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/bf75d430-27db-44eb-b2f3-7921d18f0dc1-metrics-certs\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390479 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-frr-sockets\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390502 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1e9dc09-f278-46d6-8a6f-0c617a7446f9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p95bt\" (UID: \"f1e9dc09-f278-46d6-8a6f-0c617a7446f9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390526 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbxq7\" (UniqueName: \"kubernetes.io/projected/da662e44-679d-4336-975b-374c7f799f27-kube-api-access-tbxq7\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390549 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metrics-certs\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist\") pod \"speaker-2klzt\" (UID: 
\"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390603 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-frr-conf\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390624 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf75d430-27db-44eb-b2f3-7921d18f0dc1-cert\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390655 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75h86\" (UniqueName: \"kubernetes.io/projected/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-kube-api-access-75h86\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.390678 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-metrics\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.391436 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-metrics\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.393339 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-reloader\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: E0127 08:02:51.393471 4799 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 08:02:51 crc kubenswrapper[4799]: E0127 08:02:51.393603 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist podName:9bc57c87-6dfd-4725-81b3-f8dadfb587a3 nodeName:}" failed. No retries permitted until 2026-01-27 08:02:51.893569826 +0000 UTC m=+1038.204673891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist") pod "speaker-2klzt" (UID: "9bc57c87-6dfd-4725-81b3-f8dadfb587a3") : secret "metallb-memberlist" not found Jan 27 08:02:51 crc kubenswrapper[4799]: E0127 08:02:51.393603 4799 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 08:02:51 crc kubenswrapper[4799]: E0127 08:02:51.393697 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metrics-certs podName:9bc57c87-6dfd-4725-81b3-f8dadfb587a3 nodeName:}" failed. No retries permitted until 2026-01-27 08:02:51.893666339 +0000 UTC m=+1038.204770434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metrics-certs") pod "speaker-2klzt" (UID: "9bc57c87-6dfd-4725-81b3-f8dadfb587a3") : secret "speaker-certs-secret" not found Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.394127 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-frr-conf\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.394182 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metallb-excludel2\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.394213 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/da662e44-679d-4336-975b-374c7f799f27-frr-sockets\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.394348 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/da662e44-679d-4336-975b-374c7f799f27-frr-startup\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.413282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1e9dc09-f278-46d6-8a6f-0c617a7446f9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p95bt\" (UID: \"f1e9dc09-f278-46d6-8a6f-0c617a7446f9\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.415729 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da662e44-679d-4336-975b-374c7f799f27-metrics-certs\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.421857 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wsgq\" (UniqueName: \"kubernetes.io/projected/f1e9dc09-f278-46d6-8a6f-0c617a7446f9-kube-api-access-2wsgq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p95bt\" (UID: \"f1e9dc09-f278-46d6-8a6f-0c617a7446f9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.421883 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbxq7\" (UniqueName: \"kubernetes.io/projected/da662e44-679d-4336-975b-374c7f799f27-kube-api-access-tbxq7\") pod \"frr-k8s-tjzjd\" (UID: \"da662e44-679d-4336-975b-374c7f799f27\") " pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.424627 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.429093 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75h86\" (UniqueName: \"kubernetes.io/projected/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-kube-api-access-75h86\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.491704 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/bf75d430-27db-44eb-b2f3-7921d18f0dc1-kube-api-access-wxdxt\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.491774 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf75d430-27db-44eb-b2f3-7921d18f0dc1-metrics-certs\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.491832 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf75d430-27db-44eb-b2f3-7921d18f0dc1-cert\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.493889 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.497608 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/bf75d430-27db-44eb-b2f3-7921d18f0dc1-metrics-certs\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.506482 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf75d430-27db-44eb-b2f3-7921d18f0dc1-cert\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.508295 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/bf75d430-27db-44eb-b2f3-7921d18f0dc1-kube-api-access-wxdxt\") pod \"controller-6968d8fdc4-mdvm9\" (UID: \"bf75d430-27db-44eb-b2f3-7921d18f0dc1\") " pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.717722 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.805054 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.882977 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt"] Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.907464 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.907530 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metrics-certs\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: E0127 08:02:51.907631 4799 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 08:02:51 crc kubenswrapper[4799]: E0127 08:02:51.907694 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist podName:9bc57c87-6dfd-4725-81b3-f8dadfb587a3 nodeName:}" failed. No retries permitted until 2026-01-27 08:02:52.907674914 +0000 UTC m=+1039.218778979 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist") pod "speaker-2klzt" (UID: "9bc57c87-6dfd-4725-81b3-f8dadfb587a3") : secret "metallb-memberlist" not found Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.914929 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-metrics-certs\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:51 crc kubenswrapper[4799]: I0127 08:02:51.996188 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mdvm9"] Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.835115 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" event={"ID":"f1e9dc09-f278-46d6-8a6f-0c617a7446f9","Type":"ContainerStarted","Data":"9afc7747f7ebdf7ae2e8ddf7119f97671463c31917dcc1bef999481cf0ca9158"} Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.838395 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mdvm9" event={"ID":"bf75d430-27db-44eb-b2f3-7921d18f0dc1","Type":"ContainerStarted","Data":"1dc86bc54a53f8bb104fc611599e82f8381f3765a98e955db1a59ecbdae6c7e8"} Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.838441 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mdvm9" event={"ID":"bf75d430-27db-44eb-b2f3-7921d18f0dc1","Type":"ContainerStarted","Data":"76c7620dc9d6b7f9eef06fb355f32afef4ba5a30ddc2bb9152fd32dd1538fa18"} Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.838454 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mdvm9" 
event={"ID":"bf75d430-27db-44eb-b2f3-7921d18f0dc1","Type":"ContainerStarted","Data":"7adfed8f367a8aa396da3e0abb2820ee33cefec112b44c16a478a3fabf1aa15c"} Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.838515 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.839684 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerStarted","Data":"842fb40028194eaea20e6368403385acc76fc85dd51b10021dd45acc5d44f519"} Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.919085 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.924557 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9bc57c87-6dfd-4725-81b3-f8dadfb587a3-memberlist\") pod \"speaker-2klzt\" (UID: \"9bc57c87-6dfd-4725-81b3-f8dadfb587a3\") " pod="metallb-system/speaker-2klzt" Jan 27 08:02:52 crc kubenswrapper[4799]: I0127 08:02:52.988693 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2klzt" Jan 27 08:02:53 crc kubenswrapper[4799]: W0127 08:02:53.014221 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc57c87_6dfd_4725_81b3_f8dadfb587a3.slice/crio-3ddc0e0aa719f089f914dcee61c98edeb367e119249b786e61ca6c6030cc406b WatchSource:0}: Error finding container 3ddc0e0aa719f089f914dcee61c98edeb367e119249b786e61ca6c6030cc406b: Status 404 returned error can't find the container with id 3ddc0e0aa719f089f914dcee61c98edeb367e119249b786e61ca6c6030cc406b Jan 27 08:02:53 crc kubenswrapper[4799]: I0127 08:02:53.730850 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:02:53 crc kubenswrapper[4799]: I0127 08:02:53.731236 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:02:53 crc kubenswrapper[4799]: I0127 08:02:53.845984 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2klzt" event={"ID":"9bc57c87-6dfd-4725-81b3-f8dadfb587a3","Type":"ContainerStarted","Data":"ca971f5b872f4a604131220373a871dd126a68422bae06faeb78359098a115fc"} Jan 27 08:02:53 crc kubenswrapper[4799]: I0127 08:02:53.846038 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2klzt" event={"ID":"9bc57c87-6dfd-4725-81b3-f8dadfb587a3","Type":"ContainerStarted","Data":"edfe0c096a7bed51014f902cbd53dc08d003b2ae8b78415cea7e98d923deaaf3"} Jan 27 08:02:53 crc kubenswrapper[4799]: 
I0127 08:02:53.846053 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2klzt" event={"ID":"9bc57c87-6dfd-4725-81b3-f8dadfb587a3","Type":"ContainerStarted","Data":"3ddc0e0aa719f089f914dcee61c98edeb367e119249b786e61ca6c6030cc406b"} Jan 27 08:02:53 crc kubenswrapper[4799]: I0127 08:02:53.867472 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2klzt" podStartSLOduration=2.8674543789999998 podStartE2EDuration="2.867454379s" podCreationTimestamp="2026-01-27 08:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:02:53.863814359 +0000 UTC m=+1040.174918434" watchObservedRunningTime="2026-01-27 08:02:53.867454379 +0000 UTC m=+1040.178558454" Jan 27 08:02:53 crc kubenswrapper[4799]: I0127 08:02:53.870578 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-mdvm9" podStartSLOduration=2.870561645 podStartE2EDuration="2.870561645s" podCreationTimestamp="2026-01-27 08:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:02:52.859636164 +0000 UTC m=+1039.170740269" watchObservedRunningTime="2026-01-27 08:02:53.870561645 +0000 UTC m=+1040.181665730" Jan 27 08:02:59 crc kubenswrapper[4799]: I0127 08:02:59.886344 4799 generic.go:334] "Generic (PLEG): container finished" podID="da662e44-679d-4336-975b-374c7f799f27" containerID="5e276174a981a6f38094f6ec1870d554cef102453f11d405153feaaf124daa79" exitCode=0 Jan 27 08:02:59 crc kubenswrapper[4799]: I0127 08:02:59.886491 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerDied","Data":"5e276174a981a6f38094f6ec1870d554cef102453f11d405153feaaf124daa79"} Jan 27 08:02:59 crc 
kubenswrapper[4799]: I0127 08:02:59.889158 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" event={"ID":"f1e9dc09-f278-46d6-8a6f-0c617a7446f9","Type":"ContainerStarted","Data":"00c49542fe58b305318abc4d3195062cd58b98b92c4d5295b14f470edd10ce20"} Jan 27 08:02:59 crc kubenswrapper[4799]: I0127 08:02:59.889360 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:02:59 crc kubenswrapper[4799]: I0127 08:02:59.949236 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" podStartSLOduration=1.7630763310000002 podStartE2EDuration="8.949205997s" podCreationTimestamp="2026-01-27 08:02:51 +0000 UTC" firstStartedPulling="2026-01-27 08:02:51.897891355 +0000 UTC m=+1038.208995430" lastFinishedPulling="2026-01-27 08:02:59.084021031 +0000 UTC m=+1045.395125096" observedRunningTime="2026-01-27 08:02:59.947475739 +0000 UTC m=+1046.258579824" watchObservedRunningTime="2026-01-27 08:02:59.949205997 +0000 UTC m=+1046.260310102" Jan 27 08:03:00 crc kubenswrapper[4799]: I0127 08:03:00.899706 4799 generic.go:334] "Generic (PLEG): container finished" podID="da662e44-679d-4336-975b-374c7f799f27" containerID="e9b4f81c74413ab2d50b39fa79e74c024c57ddd8c915b7ec3625a5ad8e2a374a" exitCode=0 Jan 27 08:03:00 crc kubenswrapper[4799]: I0127 08:03:00.899836 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerDied","Data":"e9b4f81c74413ab2d50b39fa79e74c024c57ddd8c915b7ec3625a5ad8e2a374a"} Jan 27 08:03:01 crc kubenswrapper[4799]: I0127 08:03:01.910245 4799 generic.go:334] "Generic (PLEG): container finished" podID="da662e44-679d-4336-975b-374c7f799f27" containerID="fa4415d4d1c5b72af4e3fa1104b9dfc94f4e1dfcfb112312111238975568bee6" exitCode=0 Jan 27 08:03:01 crc 
kubenswrapper[4799]: I0127 08:03:01.910312 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerDied","Data":"fa4415d4d1c5b72af4e3fa1104b9dfc94f4e1dfcfb112312111238975568bee6"} Jan 27 08:03:02 crc kubenswrapper[4799]: I0127 08:03:02.919782 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerStarted","Data":"30ec677976f28b228b23caf6b35514ec5c894049bc0527474836a71095bebde9"} Jan 27 08:03:02 crc kubenswrapper[4799]: I0127 08:03:02.920174 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerStarted","Data":"805e8239e20a22612ceefa6485ea70a36b47a69958241b18e08ac25c9e0d6557"} Jan 27 08:03:02 crc kubenswrapper[4799]: I0127 08:03:02.920188 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerStarted","Data":"db1337bc0b3482f56ebb2d7cecd9ccdd562d5f2a045286188f2828711adcec1e"} Jan 27 08:03:02 crc kubenswrapper[4799]: I0127 08:03:02.920202 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerStarted","Data":"2c1a9cb321cfdb455fefcbed83f4263422f3452299f4f5b3f4e7a0a47670507c"} Jan 27 08:03:02 crc kubenswrapper[4799]: I0127 08:03:02.920214 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerStarted","Data":"2c65d43d6ee3723d2023bbd3a12fc05cc6bc072afc85dfc5c9d6fe3fc41f56a4"} Jan 27 08:03:02 crc kubenswrapper[4799]: I0127 08:03:02.989681 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2klzt" Jan 27 08:03:03 crc 
kubenswrapper[4799]: I0127 08:03:03.935971 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tjzjd" event={"ID":"da662e44-679d-4336-975b-374c7f799f27","Type":"ContainerStarted","Data":"32b8142712e6cdab9006e4f4d838f7e6d9d7754abb0b718aa8b32b5e84b6406d"} Jan 27 08:03:03 crc kubenswrapper[4799]: I0127 08:03:03.936265 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:03:03 crc kubenswrapper[4799]: I0127 08:03:03.965572 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tjzjd" podStartSLOduration=5.736385368 podStartE2EDuration="12.965550027s" podCreationTimestamp="2026-01-27 08:02:51 +0000 UTC" firstStartedPulling="2026-01-27 08:02:51.873743952 +0000 UTC m=+1038.184848017" lastFinishedPulling="2026-01-27 08:02:59.102908611 +0000 UTC m=+1045.414012676" observedRunningTime="2026-01-27 08:03:03.960965501 +0000 UTC m=+1050.272069636" watchObservedRunningTime="2026-01-27 08:03:03.965550027 +0000 UTC m=+1050.276654102" Jan 27 08:03:06 crc kubenswrapper[4799]: I0127 08:03:06.719475 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:03:06 crc kubenswrapper[4799]: I0127 08:03:06.772874 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:03:11 crc kubenswrapper[4799]: I0127 08:03:11.433811 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p95bt" Jan 27 08:03:11 crc kubenswrapper[4799]: I0127 08:03:11.811094 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-mdvm9" Jan 27 08:03:12 crc kubenswrapper[4799]: I0127 08:03:12.992574 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2klzt" Jan 27 08:03:14 crc 
kubenswrapper[4799]: I0127 08:03:14.836262 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5"] Jan 27 08:03:14 crc kubenswrapper[4799]: I0127 08:03:14.837365 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:14 crc kubenswrapper[4799]: I0127 08:03:14.840479 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 08:03:14 crc kubenswrapper[4799]: I0127 08:03:14.861712 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5"] Jan 27 08:03:14 crc kubenswrapper[4799]: I0127 08:03:14.944982 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:14 crc kubenswrapper[4799]: I0127 08:03:14.945024 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:14 crc kubenswrapper[4799]: I0127 08:03:14.945215 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lbk\" (UniqueName: 
\"kubernetes.io/projected/f7d3f13c-4213-485e-bbf6-88453f2abd8b-kube-api-access-c9lbk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.046420 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.046464 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.046514 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9lbk\" (UniqueName: \"kubernetes.io/projected/f7d3f13c-4213-485e-bbf6-88453f2abd8b-kube-api-access-c9lbk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.047351 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" 
(UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.047389 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.083149 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9lbk\" (UniqueName: \"kubernetes.io/projected/f7d3f13c-4213-485e-bbf6-88453f2abd8b-kube-api-access-c9lbk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.161415 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:15 crc kubenswrapper[4799]: I0127 08:03:15.404594 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5"] Jan 27 08:03:16 crc kubenswrapper[4799]: I0127 08:03:16.012852 4799 generic.go:334] "Generic (PLEG): container finished" podID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerID="39d36f1a8f2c260ba8126e2abadaa2b0bd980b163e8bb44ade3bed3248b4fe43" exitCode=0 Jan 27 08:03:16 crc kubenswrapper[4799]: I0127 08:03:16.013161 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" event={"ID":"f7d3f13c-4213-485e-bbf6-88453f2abd8b","Type":"ContainerDied","Data":"39d36f1a8f2c260ba8126e2abadaa2b0bd980b163e8bb44ade3bed3248b4fe43"} Jan 27 08:03:16 crc kubenswrapper[4799]: I0127 08:03:16.013202 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" event={"ID":"f7d3f13c-4213-485e-bbf6-88453f2abd8b","Type":"ContainerStarted","Data":"222132ee4a560d9050645013f7cd6309938f06e34333437eafb1d74da6dfbe3e"} Jan 27 08:03:20 crc kubenswrapper[4799]: I0127 08:03:20.042430 4799 generic.go:334] "Generic (PLEG): container finished" podID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerID="7725dd681047483b36a34a0a03c42635f5d96e8a96869329668f92dc7cea3ae7" exitCode=0 Jan 27 08:03:20 crc kubenswrapper[4799]: I0127 08:03:20.042494 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" event={"ID":"f7d3f13c-4213-485e-bbf6-88453f2abd8b","Type":"ContainerDied","Data":"7725dd681047483b36a34a0a03c42635f5d96e8a96869329668f92dc7cea3ae7"} Jan 27 08:03:21 crc kubenswrapper[4799]: I0127 08:03:21.053321 4799 
generic.go:334] "Generic (PLEG): container finished" podID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerID="008410370c69d2006f1ca549d8a8bb5e7e2e6591d876d40fe7383e7cf55cb5b2" exitCode=0 Jan 27 08:03:21 crc kubenswrapper[4799]: I0127 08:03:21.053486 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" event={"ID":"f7d3f13c-4213-485e-bbf6-88453f2abd8b","Type":"ContainerDied","Data":"008410370c69d2006f1ca549d8a8bb5e7e2e6591d876d40fe7383e7cf55cb5b2"} Jan 27 08:03:21 crc kubenswrapper[4799]: I0127 08:03:21.723112 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tjzjd" Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.304277 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.450343 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9lbk\" (UniqueName: \"kubernetes.io/projected/f7d3f13c-4213-485e-bbf6-88453f2abd8b-kube-api-access-c9lbk\") pod \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.450647 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-util\") pod \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.450756 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-bundle\") pod \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\" (UID: \"f7d3f13c-4213-485e-bbf6-88453f2abd8b\") " Jan 
27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.451948 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-bundle" (OuterVolumeSpecName: "bundle") pod "f7d3f13c-4213-485e-bbf6-88453f2abd8b" (UID: "f7d3f13c-4213-485e-bbf6-88453f2abd8b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.455588 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d3f13c-4213-485e-bbf6-88453f2abd8b-kube-api-access-c9lbk" (OuterVolumeSpecName: "kube-api-access-c9lbk") pod "f7d3f13c-4213-485e-bbf6-88453f2abd8b" (UID: "f7d3f13c-4213-485e-bbf6-88453f2abd8b"). InnerVolumeSpecName "kube-api-access-c9lbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.461804 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-util" (OuterVolumeSpecName: "util") pod "f7d3f13c-4213-485e-bbf6-88453f2abd8b" (UID: "f7d3f13c-4213-485e-bbf6-88453f2abd8b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.552157 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9lbk\" (UniqueName: \"kubernetes.io/projected/f7d3f13c-4213-485e-bbf6-88453f2abd8b-kube-api-access-c9lbk\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.552220 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:22 crc kubenswrapper[4799]: I0127 08:03:22.552241 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7d3f13c-4213-485e-bbf6-88453f2abd8b-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.068827 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" event={"ID":"f7d3f13c-4213-485e-bbf6-88453f2abd8b","Type":"ContainerDied","Data":"222132ee4a560d9050645013f7cd6309938f06e34333437eafb1d74da6dfbe3e"} Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.068888 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222132ee4a560d9050645013f7cd6309938f06e34333437eafb1d74da6dfbe3e" Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.068955 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5" Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.730937 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.731011 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.731067 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.731911 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"213c4fc7aacd3827bd72695593e20f0c8c733e85cf917988ad9ae8811f1be289"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:03:23 crc kubenswrapper[4799]: I0127 08:03:23.731996 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://213c4fc7aacd3827bd72695593e20f0c8c733e85cf917988ad9ae8811f1be289" gracePeriod=600 Jan 27 08:03:24 crc kubenswrapper[4799]: I0127 08:03:24.076861 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"213c4fc7aacd3827bd72695593e20f0c8c733e85cf917988ad9ae8811f1be289"} Jan 27 08:03:24 crc kubenswrapper[4799]: I0127 08:03:24.076862 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="213c4fc7aacd3827bd72695593e20f0c8c733e85cf917988ad9ae8811f1be289" exitCode=0 Jan 27 08:03:24 crc kubenswrapper[4799]: I0127 08:03:24.077198 4799 scope.go:117] "RemoveContainer" containerID="483ddb194043e569830cf4dd2964ecb2e41dd9cb022fa07f532bbee3b74029d4" Jan 27 08:03:24 crc kubenswrapper[4799]: I0127 08:03:24.077236 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"13185769064c6ec2b432a1350a219c32fc8634c50a20acef33753a2a5d7615d7"} Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.691291 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl"] Jan 27 08:03:25 crc kubenswrapper[4799]: E0127 08:03:25.691835 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerName="util" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.691847 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerName="util" Jan 27 08:03:25 crc kubenswrapper[4799]: E0127 08:03:25.691861 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerName="extract" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.691868 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerName="extract" Jan 27 08:03:25 crc 
kubenswrapper[4799]: E0127 08:03:25.691877 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerName="pull" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.691883 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerName="pull" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.691992 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d3f13c-4213-485e-bbf6-88453f2abd8b" containerName="extract" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.692394 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.694977 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.695323 4799 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-qp9lg" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.695563 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.711477 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl"] Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.794114 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnhc\" (UniqueName: \"kubernetes.io/projected/3f0aef0c-bd70-4070-b9d7-9d274d29072b-kube-api-access-2cnhc\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z8fsl\" (UID: \"3f0aef0c-bd70-4070-b9d7-9d274d29072b\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.794204 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f0aef0c-bd70-4070-b9d7-9d274d29072b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z8fsl\" (UID: \"3f0aef0c-bd70-4070-b9d7-9d274d29072b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.895123 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnhc\" (UniqueName: \"kubernetes.io/projected/3f0aef0c-bd70-4070-b9d7-9d274d29072b-kube-api-access-2cnhc\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z8fsl\" (UID: \"3f0aef0c-bd70-4070-b9d7-9d274d29072b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.895172 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f0aef0c-bd70-4070-b9d7-9d274d29072b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z8fsl\" (UID: \"3f0aef0c-bd70-4070-b9d7-9d274d29072b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.895733 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f0aef0c-bd70-4070-b9d7-9d274d29072b-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z8fsl\" (UID: \"3f0aef0c-bd70-4070-b9d7-9d274d29072b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:25 crc kubenswrapper[4799]: I0127 08:03:25.915189 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2cnhc\" (UniqueName: \"kubernetes.io/projected/3f0aef0c-bd70-4070-b9d7-9d274d29072b-kube-api-access-2cnhc\") pod \"cert-manager-operator-controller-manager-64cf6dff88-z8fsl\" (UID: \"3f0aef0c-bd70-4070-b9d7-9d274d29072b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:26 crc kubenswrapper[4799]: I0127 08:03:26.008480 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" Jan 27 08:03:26 crc kubenswrapper[4799]: I0127 08:03:26.440992 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl"] Jan 27 08:03:27 crc kubenswrapper[4799]: I0127 08:03:27.097014 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" event={"ID":"3f0aef0c-bd70-4070-b9d7-9d274d29072b","Type":"ContainerStarted","Data":"a2c0e648feb266fc8c150d6a688cd298a56e5339824a3a1a188287b1efffcc79"} Jan 27 08:03:35 crc kubenswrapper[4799]: I0127 08:03:35.146734 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" event={"ID":"3f0aef0c-bd70-4070-b9d7-9d274d29072b","Type":"ContainerStarted","Data":"e667e0945ecc64274d6c31f92bf8f32325289afa2cece5c3fb18e1b245c1f5a7"} Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.539621 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-z8fsl" podStartSLOduration=6.096420998 podStartE2EDuration="14.539599059s" podCreationTimestamp="2026-01-27 08:03:25 +0000 UTC" firstStartedPulling="2026-01-27 08:03:26.454746964 +0000 UTC m=+1072.765851039" lastFinishedPulling="2026-01-27 08:03:34.897925035 +0000 UTC m=+1081.209029100" 
observedRunningTime="2026-01-27 08:03:35.171639047 +0000 UTC m=+1081.482743112" watchObservedRunningTime="2026-01-27 08:03:39.539599059 +0000 UTC m=+1085.850703124" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.542471 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-4tz6c"] Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.543340 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.545900 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.545950 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.546762 4799 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-sxjsm" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.563714 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-4tz6c"] Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.625649 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d26751de-dc67-4950-8105-b3a479a70119-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-4tz6c\" (UID: \"d26751de-dc67-4950-8105-b3a479a70119\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.625807 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz26t\" (UniqueName: \"kubernetes.io/projected/d26751de-dc67-4950-8105-b3a479a70119-kube-api-access-rz26t\") pod \"cert-manager-webhook-f4fb5df64-4tz6c\" 
(UID: \"d26751de-dc67-4950-8105-b3a479a70119\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.727063 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz26t\" (UniqueName: \"kubernetes.io/projected/d26751de-dc67-4950-8105-b3a479a70119-kube-api-access-rz26t\") pod \"cert-manager-webhook-f4fb5df64-4tz6c\" (UID: \"d26751de-dc67-4950-8105-b3a479a70119\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.727136 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d26751de-dc67-4950-8105-b3a479a70119-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-4tz6c\" (UID: \"d26751de-dc67-4950-8105-b3a479a70119\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.755630 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz26t\" (UniqueName: \"kubernetes.io/projected/d26751de-dc67-4950-8105-b3a479a70119-kube-api-access-rz26t\") pod \"cert-manager-webhook-f4fb5df64-4tz6c\" (UID: \"d26751de-dc67-4950-8105-b3a479a70119\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.758819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d26751de-dc67-4950-8105-b3a479a70119-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-4tz6c\" (UID: \"d26751de-dc67-4950-8105-b3a479a70119\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:39 crc kubenswrapper[4799]: I0127 08:03:39.862710 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:40 crc kubenswrapper[4799]: I0127 08:03:40.258836 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-4tz6c"] Jan 27 08:03:40 crc kubenswrapper[4799]: I0127 08:03:40.838974 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd"] Jan 27 08:03:40 crc kubenswrapper[4799]: I0127 08:03:40.840174 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:40 crc kubenswrapper[4799]: I0127 08:03:40.843088 4799 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-m9q9q" Jan 27 08:03:40 crc kubenswrapper[4799]: I0127 08:03:40.863126 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd"] Jan 27 08:03:40 crc kubenswrapper[4799]: I0127 08:03:40.941899 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2tz8\" (UniqueName: \"kubernetes.io/projected/778c26b2-0e97-445d-98bf-054a3457ff9b-kube-api-access-f2tz8\") pod \"cert-manager-cainjector-855d9ccff4-gdhvd\" (UID: \"778c26b2-0e97-445d-98bf-054a3457ff9b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:40 crc kubenswrapper[4799]: I0127 08:03:40.941958 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/778c26b2-0e97-445d-98bf-054a3457ff9b-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-gdhvd\" (UID: \"778c26b2-0e97-445d-98bf-054a3457ff9b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:41 crc kubenswrapper[4799]: I0127 08:03:41.043120 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f2tz8\" (UniqueName: \"kubernetes.io/projected/778c26b2-0e97-445d-98bf-054a3457ff9b-kube-api-access-f2tz8\") pod \"cert-manager-cainjector-855d9ccff4-gdhvd\" (UID: \"778c26b2-0e97-445d-98bf-054a3457ff9b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:41 crc kubenswrapper[4799]: I0127 08:03:41.043192 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/778c26b2-0e97-445d-98bf-054a3457ff9b-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-gdhvd\" (UID: \"778c26b2-0e97-445d-98bf-054a3457ff9b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:41 crc kubenswrapper[4799]: I0127 08:03:41.064669 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/778c26b2-0e97-445d-98bf-054a3457ff9b-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-gdhvd\" (UID: \"778c26b2-0e97-445d-98bf-054a3457ff9b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:41 crc kubenswrapper[4799]: I0127 08:03:41.069686 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2tz8\" (UniqueName: \"kubernetes.io/projected/778c26b2-0e97-445d-98bf-054a3457ff9b-kube-api-access-f2tz8\") pod \"cert-manager-cainjector-855d9ccff4-gdhvd\" (UID: \"778c26b2-0e97-445d-98bf-054a3457ff9b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:41 crc kubenswrapper[4799]: I0127 08:03:41.166644 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" Jan 27 08:03:41 crc kubenswrapper[4799]: I0127 08:03:41.180784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" event={"ID":"d26751de-dc67-4950-8105-b3a479a70119","Type":"ContainerStarted","Data":"2939c85c54682ffc7ba41bbb61c5e56ddda7a35850afeaf8c88135ed53a05bd1"} Jan 27 08:03:41 crc kubenswrapper[4799]: I0127 08:03:41.435981 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd"] Jan 27 08:03:41 crc kubenswrapper[4799]: W0127 08:03:41.448562 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod778c26b2_0e97_445d_98bf_054a3457ff9b.slice/crio-39eb45273000b4bf919711ee2b6b1daade05b7e5e329276faa608fbf654dc1b2 WatchSource:0}: Error finding container 39eb45273000b4bf919711ee2b6b1daade05b7e5e329276faa608fbf654dc1b2: Status 404 returned error can't find the container with id 39eb45273000b4bf919711ee2b6b1daade05b7e5e329276faa608fbf654dc1b2 Jan 27 08:03:42 crc kubenswrapper[4799]: I0127 08:03:42.188052 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" event={"ID":"778c26b2-0e97-445d-98bf-054a3457ff9b","Type":"ContainerStarted","Data":"39eb45273000b4bf919711ee2b6b1daade05b7e5e329276faa608fbf654dc1b2"} Jan 27 08:03:49 crc kubenswrapper[4799]: I0127 08:03:49.244034 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" event={"ID":"d26751de-dc67-4950-8105-b3a479a70119","Type":"ContainerStarted","Data":"a397dfe214de6a8e7f028449c80ee2c718268d0b97a6764d70d5b14a178e64d4"} Jan 27 08:03:49 crc kubenswrapper[4799]: I0127 08:03:49.245696 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:49 crc 
kubenswrapper[4799]: I0127 08:03:49.246074 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" event={"ID":"778c26b2-0e97-445d-98bf-054a3457ff9b","Type":"ContainerStarted","Data":"bd7ed78952d354b661a63079e6eb8c7003ac4b6d1ad9ebce5b86e512105a294e"} Jan 27 08:03:49 crc kubenswrapper[4799]: I0127 08:03:49.275615 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" podStartSLOduration=2.179984157 podStartE2EDuration="10.275585276s" podCreationTimestamp="2026-01-27 08:03:39 +0000 UTC" firstStartedPulling="2026-01-27 08:03:40.283201233 +0000 UTC m=+1086.594305308" lastFinishedPulling="2026-01-27 08:03:48.378802342 +0000 UTC m=+1094.689906427" observedRunningTime="2026-01-27 08:03:49.262431374 +0000 UTC m=+1095.573535459" watchObservedRunningTime="2026-01-27 08:03:49.275585276 +0000 UTC m=+1095.586689361" Jan 27 08:03:49 crc kubenswrapper[4799]: I0127 08:03:49.281176 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-gdhvd" podStartSLOduration=2.320209412 podStartE2EDuration="9.281154879s" podCreationTimestamp="2026-01-27 08:03:40 +0000 UTC" firstStartedPulling="2026-01-27 08:03:41.450165232 +0000 UTC m=+1087.761269297" lastFinishedPulling="2026-01-27 08:03:48.411110689 +0000 UTC m=+1094.722214764" observedRunningTime="2026-01-27 08:03:49.27611914 +0000 UTC m=+1095.587223205" watchObservedRunningTime="2026-01-27 08:03:49.281154879 +0000 UTC m=+1095.592258964" Jan 27 08:03:54 crc kubenswrapper[4799]: I0127 08:03:54.865022 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-4tz6c" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.082594 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-88mbp"] Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 
08:03:58.086282 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.088442 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-88mbp"] Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.093059 4799 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9mqfd" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.225274 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkl5\" (UniqueName: \"kubernetes.io/projected/487704d6-26db-4d0c-9e50-443375daf632-kube-api-access-zfkl5\") pod \"cert-manager-86cb77c54b-88mbp\" (UID: \"487704d6-26db-4d0c-9e50-443375daf632\") " pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.225403 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/487704d6-26db-4d0c-9e50-443375daf632-bound-sa-token\") pod \"cert-manager-86cb77c54b-88mbp\" (UID: \"487704d6-26db-4d0c-9e50-443375daf632\") " pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.327439 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfkl5\" (UniqueName: \"kubernetes.io/projected/487704d6-26db-4d0c-9e50-443375daf632-kube-api-access-zfkl5\") pod \"cert-manager-86cb77c54b-88mbp\" (UID: \"487704d6-26db-4d0c-9e50-443375daf632\") " pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.327515 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/487704d6-26db-4d0c-9e50-443375daf632-bound-sa-token\") pod 
\"cert-manager-86cb77c54b-88mbp\" (UID: \"487704d6-26db-4d0c-9e50-443375daf632\") " pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.362167 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfkl5\" (UniqueName: \"kubernetes.io/projected/487704d6-26db-4d0c-9e50-443375daf632-kube-api-access-zfkl5\") pod \"cert-manager-86cb77c54b-88mbp\" (UID: \"487704d6-26db-4d0c-9e50-443375daf632\") " pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.363089 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/487704d6-26db-4d0c-9e50-443375daf632-bound-sa-token\") pod \"cert-manager-86cb77c54b-88mbp\" (UID: \"487704d6-26db-4d0c-9e50-443375daf632\") " pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.408123 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-88mbp" Jan 27 08:03:58 crc kubenswrapper[4799]: I0127 08:03:58.892696 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-88mbp"] Jan 27 08:03:58 crc kubenswrapper[4799]: W0127 08:03:58.901128 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod487704d6_26db_4d0c_9e50_443375daf632.slice/crio-57478e100eec8c7e3681958a664a80da47c313934f2855d172ee1716fd34d8fc WatchSource:0}: Error finding container 57478e100eec8c7e3681958a664a80da47c313934f2855d172ee1716fd34d8fc: Status 404 returned error can't find the container with id 57478e100eec8c7e3681958a664a80da47c313934f2855d172ee1716fd34d8fc Jan 27 08:03:59 crc kubenswrapper[4799]: I0127 08:03:59.330419 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-88mbp" event={"ID":"487704d6-26db-4d0c-9e50-443375daf632","Type":"ContainerStarted","Data":"88d134e1fa9268c6f09b6ba848844401d1e5b49df10ab309187064e613944acd"} Jan 27 08:03:59 crc kubenswrapper[4799]: I0127 08:03:59.330778 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-88mbp" event={"ID":"487704d6-26db-4d0c-9e50-443375daf632","Type":"ContainerStarted","Data":"57478e100eec8c7e3681958a664a80da47c313934f2855d172ee1716fd34d8fc"} Jan 27 08:03:59 crc kubenswrapper[4799]: I0127 08:03:59.359445 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-88mbp" podStartSLOduration=1.359416023 podStartE2EDuration="1.359416023s" podCreationTimestamp="2026-01-27 08:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:03:59.357286034 +0000 UTC m=+1105.668390109" watchObservedRunningTime="2026-01-27 08:03:59.359416023 +0000 UTC m=+1105.670520098" Jan 
27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.423020 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tqlq9"] Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.425044 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tqlq9" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.427217 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lc7hk" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.427253 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.427568 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.446219 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tqlq9"] Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.580688 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzttq\" (UniqueName: \"kubernetes.io/projected/e07a60e8-937a-4c1b-a28d-d8daf6f445b9-kube-api-access-lzttq\") pod \"openstack-operator-index-tqlq9\" (UID: \"e07a60e8-937a-4c1b-a28d-d8daf6f445b9\") " pod="openstack-operators/openstack-operator-index-tqlq9" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.682195 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzttq\" (UniqueName: \"kubernetes.io/projected/e07a60e8-937a-4c1b-a28d-d8daf6f445b9-kube-api-access-lzttq\") pod \"openstack-operator-index-tqlq9\" (UID: \"e07a60e8-937a-4c1b-a28d-d8daf6f445b9\") " pod="openstack-operators/openstack-operator-index-tqlq9" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 
08:04:08.701030 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzttq\" (UniqueName: \"kubernetes.io/projected/e07a60e8-937a-4c1b-a28d-d8daf6f445b9-kube-api-access-lzttq\") pod \"openstack-operator-index-tqlq9\" (UID: \"e07a60e8-937a-4c1b-a28d-d8daf6f445b9\") " pod="openstack-operators/openstack-operator-index-tqlq9" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.750728 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tqlq9" Jan 27 08:04:08 crc kubenswrapper[4799]: I0127 08:04:08.988969 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tqlq9"] Jan 27 08:04:08 crc kubenswrapper[4799]: W0127 08:04:08.999507 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode07a60e8_937a_4c1b_a28d_d8daf6f445b9.slice/crio-5cd4060cad3fa90931a0fda19f1ebf45bf49a7fbd24a3867523673e225c6ebf9 WatchSource:0}: Error finding container 5cd4060cad3fa90931a0fda19f1ebf45bf49a7fbd24a3867523673e225c6ebf9: Status 404 returned error can't find the container with id 5cd4060cad3fa90931a0fda19f1ebf45bf49a7fbd24a3867523673e225c6ebf9 Jan 27 08:04:09 crc kubenswrapper[4799]: I0127 08:04:09.409371 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tqlq9" event={"ID":"e07a60e8-937a-4c1b-a28d-d8daf6f445b9","Type":"ContainerStarted","Data":"5cd4060cad3fa90931a0fda19f1ebf45bf49a7fbd24a3867523673e225c6ebf9"} Jan 27 08:04:12 crc kubenswrapper[4799]: I0127 08:04:12.383423 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tqlq9"] Jan 27 08:04:12 crc kubenswrapper[4799]: I0127 08:04:12.437861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tqlq9" 
event={"ID":"e07a60e8-937a-4c1b-a28d-d8daf6f445b9","Type":"ContainerStarted","Data":"77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a"} Jan 27 08:04:12 crc kubenswrapper[4799]: I0127 08:04:12.989143 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tqlq9" podStartSLOduration=2.5563529000000003 podStartE2EDuration="4.989111983s" podCreationTimestamp="2026-01-27 08:04:08 +0000 UTC" firstStartedPulling="2026-01-27 08:04:09.00509241 +0000 UTC m=+1115.316196475" lastFinishedPulling="2026-01-27 08:04:11.437851483 +0000 UTC m=+1117.748955558" observedRunningTime="2026-01-27 08:04:12.463656693 +0000 UTC m=+1118.774760838" watchObservedRunningTime="2026-01-27 08:04:12.989111983 +0000 UTC m=+1119.300216088" Jan 27 08:04:12 crc kubenswrapper[4799]: I0127 08:04:12.997796 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v88ds"] Jan 27 08:04:12 crc kubenswrapper[4799]: I0127 08:04:12.999572 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.026617 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v88ds"] Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.146110 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drr5n\" (UniqueName: \"kubernetes.io/projected/b87380ed-955a-485e-9157-549df541f5d2-kube-api-access-drr5n\") pod \"openstack-operator-index-v88ds\" (UID: \"b87380ed-955a-485e-9157-549df541f5d2\") " pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.247463 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drr5n\" (UniqueName: \"kubernetes.io/projected/b87380ed-955a-485e-9157-549df541f5d2-kube-api-access-drr5n\") pod \"openstack-operator-index-v88ds\" (UID: \"b87380ed-955a-485e-9157-549df541f5d2\") " pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.284962 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drr5n\" (UniqueName: \"kubernetes.io/projected/b87380ed-955a-485e-9157-549df541f5d2-kube-api-access-drr5n\") pod \"openstack-operator-index-v88ds\" (UID: \"b87380ed-955a-485e-9157-549df541f5d2\") " pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.335259 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.451451 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-tqlq9" podUID="e07a60e8-937a-4c1b-a28d-d8daf6f445b9" containerName="registry-server" containerID="cri-o://77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a" gracePeriod=2 Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.749717 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v88ds"] Jan 27 08:04:13 crc kubenswrapper[4799]: W0127 08:04:13.753589 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb87380ed_955a_485e_9157_549df541f5d2.slice/crio-efbbd4966ac1614f812fc51a9ffc3478dc006aa6eae36adbc52da5490e44c523 WatchSource:0}: Error finding container efbbd4966ac1614f812fc51a9ffc3478dc006aa6eae36adbc52da5490e44c523: Status 404 returned error can't find the container with id efbbd4966ac1614f812fc51a9ffc3478dc006aa6eae36adbc52da5490e44c523 Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.799068 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tqlq9" Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.855644 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzttq\" (UniqueName: \"kubernetes.io/projected/e07a60e8-937a-4c1b-a28d-d8daf6f445b9-kube-api-access-lzttq\") pod \"e07a60e8-937a-4c1b-a28d-d8daf6f445b9\" (UID: \"e07a60e8-937a-4c1b-a28d-d8daf6f445b9\") " Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.863895 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07a60e8-937a-4c1b-a28d-d8daf6f445b9-kube-api-access-lzttq" (OuterVolumeSpecName: "kube-api-access-lzttq") pod "e07a60e8-937a-4c1b-a28d-d8daf6f445b9" (UID: "e07a60e8-937a-4c1b-a28d-d8daf6f445b9"). InnerVolumeSpecName "kube-api-access-lzttq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:04:13 crc kubenswrapper[4799]: I0127 08:04:13.957785 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzttq\" (UniqueName: \"kubernetes.io/projected/e07a60e8-937a-4c1b-a28d-d8daf6f445b9-kube-api-access-lzttq\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.461638 4799 generic.go:334] "Generic (PLEG): container finished" podID="e07a60e8-937a-4c1b-a28d-d8daf6f445b9" containerID="77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a" exitCode=0 Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.461988 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tqlq9" Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.470381 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tqlq9" event={"ID":"e07a60e8-937a-4c1b-a28d-d8daf6f445b9","Type":"ContainerDied","Data":"77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a"} Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.470461 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tqlq9" event={"ID":"e07a60e8-937a-4c1b-a28d-d8daf6f445b9","Type":"ContainerDied","Data":"5cd4060cad3fa90931a0fda19f1ebf45bf49a7fbd24a3867523673e225c6ebf9"} Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.470501 4799 scope.go:117] "RemoveContainer" containerID="77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a" Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.471898 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v88ds" event={"ID":"b87380ed-955a-485e-9157-549df541f5d2","Type":"ContainerStarted","Data":"ef8a563c63e91eb5a9281532046ebff4483585e23821f3884865d4f38f01ff5c"} Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.471946 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v88ds" event={"ID":"b87380ed-955a-485e-9157-549df541f5d2","Type":"ContainerStarted","Data":"efbbd4966ac1614f812fc51a9ffc3478dc006aa6eae36adbc52da5490e44c523"} Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.502435 4799 scope.go:117] "RemoveContainer" containerID="77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a" Jan 27 08:04:14 crc kubenswrapper[4799]: E0127 08:04:14.502999 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a\": container with ID starting with 77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a not found: ID does not exist" containerID="77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a" Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.503063 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a"} err="failed to get container status \"77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a\": rpc error: code = NotFound desc = could not find container \"77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a\": container with ID starting with 77b16ad971749ad3aab6f2557b53b149c1d81ebae6f078face44222f9e98388a not found: ID does not exist" Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.516360 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v88ds" podStartSLOduration=2.469365631 podStartE2EDuration="2.516293121s" podCreationTimestamp="2026-01-27 08:04:12 +0000 UTC" firstStartedPulling="2026-01-27 08:04:13.757898759 +0000 UTC m=+1120.069002824" lastFinishedPulling="2026-01-27 08:04:13.804826229 +0000 UTC m=+1120.115930314" observedRunningTime="2026-01-27 08:04:14.509939476 +0000 UTC m=+1120.821043611" watchObservedRunningTime="2026-01-27 08:04:14.516293121 +0000 UTC m=+1120.827397226" Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.553585 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tqlq9"] Jan 27 08:04:14 crc kubenswrapper[4799]: I0127 08:04:14.559496 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-tqlq9"] Jan 27 08:04:16 crc kubenswrapper[4799]: I0127 08:04:16.463408 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e07a60e8-937a-4c1b-a28d-d8daf6f445b9" path="/var/lib/kubelet/pods/e07a60e8-937a-4c1b-a28d-d8daf6f445b9/volumes" Jan 27 08:04:23 crc kubenswrapper[4799]: I0127 08:04:23.335937 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:23 crc kubenswrapper[4799]: I0127 08:04:23.336209 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:23 crc kubenswrapper[4799]: I0127 08:04:23.371289 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:23 crc kubenswrapper[4799]: I0127 08:04:23.563438 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-v88ds" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.830858 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5"] Jan 27 08:04:24 crc kubenswrapper[4799]: E0127 08:04:24.831181 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07a60e8-937a-4c1b-a28d-d8daf6f445b9" containerName="registry-server" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.831198 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07a60e8-937a-4c1b-a28d-d8daf6f445b9" containerName="registry-server" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.831372 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e07a60e8-937a-4c1b-a28d-d8daf6f445b9" containerName="registry-server" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.832216 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.834726 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hzw7r" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.840833 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5"] Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.923408 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-bundle\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.923494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-util\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:24 crc kubenswrapper[4799]: I0127 08:04:24.923543 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxjx\" (UniqueName: \"kubernetes.io/projected/a47f41ba-039c-418f-b3aa-8f5f8f108187-kube-api-access-qkxjx\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 
08:04:25.024729 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-bundle\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.024793 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-util\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.024825 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkxjx\" (UniqueName: \"kubernetes.io/projected/a47f41ba-039c-418f-b3aa-8f5f8f108187-kube-api-access-qkxjx\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.025460 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-bundle\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.025504 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-util\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.059515 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkxjx\" (UniqueName: \"kubernetes.io/projected/a47f41ba-039c-418f-b3aa-8f5f8f108187-kube-api-access-qkxjx\") pod \"b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.147964 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.362480 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5"] Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.551174 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" event={"ID":"a47f41ba-039c-418f-b3aa-8f5f8f108187","Type":"ContainerStarted","Data":"459d41354e8c3150860888dd2e36fe0f3d5402df9ec2b332d20b030a76a2a0a3"} Jan 27 08:04:25 crc kubenswrapper[4799]: I0127 08:04:25.551235 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" event={"ID":"a47f41ba-039c-418f-b3aa-8f5f8f108187","Type":"ContainerStarted","Data":"bb020367022dd45968de4d7463ddd963cf054ee53be48a5483b9059470c27390"} Jan 27 08:04:26 crc kubenswrapper[4799]: I0127 08:04:26.557916 4799 generic.go:334] 
"Generic (PLEG): container finished" podID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerID="459d41354e8c3150860888dd2e36fe0f3d5402df9ec2b332d20b030a76a2a0a3" exitCode=0 Jan 27 08:04:26 crc kubenswrapper[4799]: I0127 08:04:26.557963 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" event={"ID":"a47f41ba-039c-418f-b3aa-8f5f8f108187","Type":"ContainerDied","Data":"459d41354e8c3150860888dd2e36fe0f3d5402df9ec2b332d20b030a76a2a0a3"} Jan 27 08:04:27 crc kubenswrapper[4799]: I0127 08:04:27.566447 4799 generic.go:334] "Generic (PLEG): container finished" podID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerID="8ea730def5037670e8174bcc9d910cf8867ee04a3b20c8afc188488036701549" exitCode=0 Jan 27 08:04:27 crc kubenswrapper[4799]: I0127 08:04:27.566618 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" event={"ID":"a47f41ba-039c-418f-b3aa-8f5f8f108187","Type":"ContainerDied","Data":"8ea730def5037670e8174bcc9d910cf8867ee04a3b20c8afc188488036701549"} Jan 27 08:04:28 crc kubenswrapper[4799]: I0127 08:04:28.580009 4799 generic.go:334] "Generic (PLEG): container finished" podID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerID="3d2886f4cd29a1afaed932f9be5f990493f6e84e2c34d2a58208e6692f2ef51e" exitCode=0 Jan 27 08:04:28 crc kubenswrapper[4799]: I0127 08:04:28.580144 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" event={"ID":"a47f41ba-039c-418f-b3aa-8f5f8f108187","Type":"ContainerDied","Data":"3d2886f4cd29a1afaed932f9be5f990493f6e84e2c34d2a58208e6692f2ef51e"} Jan 27 08:04:29 crc kubenswrapper[4799]: I0127 08:04:29.850821 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:29 crc kubenswrapper[4799]: I0127 08:04:29.997036 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-bundle\") pod \"a47f41ba-039c-418f-b3aa-8f5f8f108187\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " Jan 27 08:04:29 crc kubenswrapper[4799]: I0127 08:04:29.997333 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkxjx\" (UniqueName: \"kubernetes.io/projected/a47f41ba-039c-418f-b3aa-8f5f8f108187-kube-api-access-qkxjx\") pod \"a47f41ba-039c-418f-b3aa-8f5f8f108187\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " Jan 27 08:04:29 crc kubenswrapper[4799]: I0127 08:04:29.997456 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-util\") pod \"a47f41ba-039c-418f-b3aa-8f5f8f108187\" (UID: \"a47f41ba-039c-418f-b3aa-8f5f8f108187\") " Jan 27 08:04:29 crc kubenswrapper[4799]: I0127 08:04:29.998431 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-bundle" (OuterVolumeSpecName: "bundle") pod "a47f41ba-039c-418f-b3aa-8f5f8f108187" (UID: "a47f41ba-039c-418f-b3aa-8f5f8f108187"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.000212 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.008782 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47f41ba-039c-418f-b3aa-8f5f8f108187-kube-api-access-qkxjx" (OuterVolumeSpecName: "kube-api-access-qkxjx") pod "a47f41ba-039c-418f-b3aa-8f5f8f108187" (UID: "a47f41ba-039c-418f-b3aa-8f5f8f108187"). InnerVolumeSpecName "kube-api-access-qkxjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.017497 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-util" (OuterVolumeSpecName: "util") pod "a47f41ba-039c-418f-b3aa-8f5f8f108187" (UID: "a47f41ba-039c-418f-b3aa-8f5f8f108187"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.102442 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkxjx\" (UniqueName: \"kubernetes.io/projected/a47f41ba-039c-418f-b3aa-8f5f8f108187-kube-api-access-qkxjx\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.103004 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a47f41ba-039c-418f-b3aa-8f5f8f108187-util\") on node \"crc\" DevicePath \"\"" Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.597464 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" event={"ID":"a47f41ba-039c-418f-b3aa-8f5f8f108187","Type":"ContainerDied","Data":"bb020367022dd45968de4d7463ddd963cf054ee53be48a5483b9059470c27390"} Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.597534 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5" Jan 27 08:04:30 crc kubenswrapper[4799]: I0127 08:04:30.597565 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb020367022dd45968de4d7463ddd963cf054ee53be48a5483b9059470c27390" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.755522 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-54bd478649-rblcz"] Jan 27 08:04:35 crc kubenswrapper[4799]: E0127 08:04:35.756383 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerName="util" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.756402 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerName="util" Jan 27 08:04:35 crc kubenswrapper[4799]: E0127 08:04:35.756425 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerName="pull" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.756432 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerName="pull" Jan 27 08:04:35 crc kubenswrapper[4799]: E0127 08:04:35.756444 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerName="extract" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.756452 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerName="extract" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.756619 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47f41ba-039c-418f-b3aa-8f5f8f108187" containerName="extract" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.757135 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.759199 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qlj2p" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.776196 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54bd478649-rblcz"] Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.888970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sdq9\" (UniqueName: \"kubernetes.io/projected/a03715cb-387a-4bac-8dfe-55ce28fae844-kube-api-access-4sdq9\") pod \"openstack-operator-controller-init-54bd478649-rblcz\" (UID: \"a03715cb-387a-4bac-8dfe-55ce28fae844\") " pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" Jan 27 08:04:35 crc kubenswrapper[4799]: I0127 08:04:35.991104 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sdq9\" (UniqueName: \"kubernetes.io/projected/a03715cb-387a-4bac-8dfe-55ce28fae844-kube-api-access-4sdq9\") pod \"openstack-operator-controller-init-54bd478649-rblcz\" (UID: \"a03715cb-387a-4bac-8dfe-55ce28fae844\") " pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" Jan 27 08:04:36 crc kubenswrapper[4799]: I0127 08:04:36.016615 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sdq9\" (UniqueName: \"kubernetes.io/projected/a03715cb-387a-4bac-8dfe-55ce28fae844-kube-api-access-4sdq9\") pod \"openstack-operator-controller-init-54bd478649-rblcz\" (UID: \"a03715cb-387a-4bac-8dfe-55ce28fae844\") " pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" Jan 27 08:04:36 crc kubenswrapper[4799]: I0127 08:04:36.074966 4799 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" Jan 27 08:04:36 crc kubenswrapper[4799]: I0127 08:04:36.499290 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54bd478649-rblcz"] Jan 27 08:04:36 crc kubenswrapper[4799]: W0127 08:04:36.511255 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda03715cb_387a_4bac_8dfe_55ce28fae844.slice/crio-e315735376d5d8f19fd2f5bbd07ba932eaced60846b0f0f48afcce222ba9527a WatchSource:0}: Error finding container e315735376d5d8f19fd2f5bbd07ba932eaced60846b0f0f48afcce222ba9527a: Status 404 returned error can't find the container with id e315735376d5d8f19fd2f5bbd07ba932eaced60846b0f0f48afcce222ba9527a Jan 27 08:04:36 crc kubenswrapper[4799]: I0127 08:04:36.640662 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" event={"ID":"a03715cb-387a-4bac-8dfe-55ce28fae844","Type":"ContainerStarted","Data":"e315735376d5d8f19fd2f5bbd07ba932eaced60846b0f0f48afcce222ba9527a"} Jan 27 08:04:40 crc kubenswrapper[4799]: I0127 08:04:40.669182 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" event={"ID":"a03715cb-387a-4bac-8dfe-55ce28fae844","Type":"ContainerStarted","Data":"d6dab9fd04dda0389c07eaddd25742fbe697ab0835ddb37f44e66214d2f4be8d"} Jan 27 08:04:40 crc kubenswrapper[4799]: I0127 08:04:40.669874 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" Jan 27 08:04:40 crc kubenswrapper[4799]: I0127 08:04:40.697548 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" podStartSLOduration=1.823385477 
podStartE2EDuration="5.697521611s" podCreationTimestamp="2026-01-27 08:04:35 +0000 UTC" firstStartedPulling="2026-01-27 08:04:36.517411869 +0000 UTC m=+1142.828515974" lastFinishedPulling="2026-01-27 08:04:40.391548043 +0000 UTC m=+1146.702652108" observedRunningTime="2026-01-27 08:04:40.691049613 +0000 UTC m=+1147.002153688" watchObservedRunningTime="2026-01-27 08:04:40.697521611 +0000 UTC m=+1147.008625696" Jan 27 08:04:46 crc kubenswrapper[4799]: I0127 08:04:46.079021 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-54bd478649-rblcz" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.526016 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.527330 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.529338 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jc2qd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.531533 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.533550 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.536796 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lpb8v" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.537013 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.540725 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.546223 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fffnv" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.583557 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.593176 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.596864 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.597983 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98s7q\" (UniqueName: \"kubernetes.io/projected/7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8-kube-api-access-98s7q\") pod \"barbican-operator-controller-manager-65ff799cfd-9qhwd\" (UID: \"7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.598126 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpwb\" (UniqueName: \"kubernetes.io/projected/d94d5e1a-ae08-488f-9d43-50c9d392bb64-kube-api-access-2vpwb\") pod \"cinder-operator-controller-manager-655bf9cfbb-vznwp\" (UID: \"d94d5e1a-ae08-488f-9d43-50c9d392bb64\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.598208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67975\" (UniqueName: \"kubernetes.io/projected/55e4a841-f81d-438b-adc5-e826eb530cfe-kube-api-access-67975\") pod \"designate-operator-controller-manager-77554cdc5c-hzmgz\" (UID: \"55e4a841-f81d-438b-adc5-e826eb530cfe\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.611421 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.612248 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.612481 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c4sbf" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.614515 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.616237 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kfw27" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.634381 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.656409 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.684744 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.685655 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.689779 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-87dv2" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.713134 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.713917 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.714237 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpwb\" (UniqueName: \"kubernetes.io/projected/d94d5e1a-ae08-488f-9d43-50c9d392bb64-kube-api-access-2vpwb\") pod \"cinder-operator-controller-manager-655bf9cfbb-vznwp\" (UID: \"d94d5e1a-ae08-488f-9d43-50c9d392bb64\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.714399 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67975\" (UniqueName: \"kubernetes.io/projected/55e4a841-f81d-438b-adc5-e826eb530cfe-kube-api-access-67975\") pod \"designate-operator-controller-manager-77554cdc5c-hzmgz\" (UID: \"55e4a841-f81d-438b-adc5-e826eb530cfe\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.714494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfcrh\" (UniqueName: \"kubernetes.io/projected/8236753b-6720-430d-81cf-7b6c0de5a0ee-kube-api-access-pfcrh\") pod \"heat-operator-controller-manager-575ffb885b-t6chd\" (UID: 
\"8236753b-6720-430d-81cf-7b6c0de5a0ee\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.714583 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9cp\" (UniqueName: \"kubernetes.io/projected/4cd87a60-6daa-4298-bc64-ff1fb8782577-kube-api-access-jb9cp\") pod \"horizon-operator-controller-manager-77d5c5b54f-z4lbd\" (UID: \"4cd87a60-6daa-4298-bc64-ff1fb8782577\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.714676 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z98q\" (UniqueName: \"kubernetes.io/projected/e322d396-a1f7-4802-bba8-91bd472c24e3-kube-api-access-4z98q\") pod \"glance-operator-controller-manager-67dd55ff59-dv7wf\" (UID: \"e322d396-a1f7-4802-bba8-91bd472c24e3\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.714757 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98s7q\" (UniqueName: \"kubernetes.io/projected/7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8-kube-api-access-98s7q\") pod \"barbican-operator-controller-manager-65ff799cfd-9qhwd\" (UID: \"7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.720423 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qj9fs" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.720481 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.723462 
4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.752533 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.769526 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98s7q\" (UniqueName: \"kubernetes.io/projected/7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8-kube-api-access-98s7q\") pod \"barbican-operator-controller-manager-65ff799cfd-9qhwd\" (UID: \"7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.770668 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67975\" (UniqueName: \"kubernetes.io/projected/55e4a841-f81d-438b-adc5-e826eb530cfe-kube-api-access-67975\") pod \"designate-operator-controller-manager-77554cdc5c-hzmgz\" (UID: \"55e4a841-f81d-438b-adc5-e826eb530cfe\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.771226 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpwb\" (UniqueName: \"kubernetes.io/projected/d94d5e1a-ae08-488f-9d43-50c9d392bb64-kube-api-access-2vpwb\") pod \"cinder-operator-controller-manager-655bf9cfbb-vznwp\" (UID: \"d94d5e1a-ae08-488f-9d43-50c9d392bb64\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.796364 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.823047 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pfcrh\" (UniqueName: \"kubernetes.io/projected/8236753b-6720-430d-81cf-7b6c0de5a0ee-kube-api-access-pfcrh\") pod \"heat-operator-controller-manager-575ffb885b-t6chd\" (UID: \"8236753b-6720-430d-81cf-7b6c0de5a0ee\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.823091 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9cp\" (UniqueName: \"kubernetes.io/projected/4cd87a60-6daa-4298-bc64-ff1fb8782577-kube-api-access-jb9cp\") pod \"horizon-operator-controller-manager-77d5c5b54f-z4lbd\" (UID: \"4cd87a60-6daa-4298-bc64-ff1fb8782577\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.823137 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z98q\" (UniqueName: \"kubernetes.io/projected/e322d396-a1f7-4802-bba8-91bd472c24e3-kube-api-access-4z98q\") pod \"glance-operator-controller-manager-67dd55ff59-dv7wf\" (UID: \"e322d396-a1f7-4802-bba8-91bd472c24e3\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.823186 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.823219 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7crt\" (UniqueName: 
\"kubernetes.io/projected/34178e14-d22f-4fbb-80e8-2a18fd062606-kube-api-access-r7crt\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.837362 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.838700 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.846646 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4tk8w" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.859345 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.860023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfcrh\" (UniqueName: \"kubernetes.io/projected/8236753b-6720-430d-81cf-7b6c0de5a0ee-kube-api-access-pfcrh\") pod \"heat-operator-controller-manager-575ffb885b-t6chd\" (UID: \"8236753b-6720-430d-81cf-7b6c0de5a0ee\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.867410 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z98q\" (UniqueName: \"kubernetes.io/projected/e322d396-a1f7-4802-bba8-91bd472c24e3-kube-api-access-4z98q\") pod \"glance-operator-controller-manager-67dd55ff59-dv7wf\" (UID: \"e322d396-a1f7-4802-bba8-91bd472c24e3\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" Jan 27 
08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.873369 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.874208 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.879453 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-btlhz" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.889183 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9cp\" (UniqueName: \"kubernetes.io/projected/4cd87a60-6daa-4298-bc64-ff1fb8782577-kube-api-access-jb9cp\") pod \"horizon-operator-controller-manager-77d5c5b54f-z4lbd\" (UID: \"4cd87a60-6daa-4298-bc64-ff1fb8782577\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.894682 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.895553 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.897698 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-j4skx" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.899282 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.910681 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.924975 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-596fs\" (UniqueName: \"kubernetes.io/projected/5aca3207-fa5d-485b-ac2c-a9c3e17081a4-kube-api-access-596fs\") pod \"keystone-operator-controller-manager-55f684fd56-kww5k\" (UID: \"5aca3207-fa5d-485b-ac2c-a9c3e17081a4\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.925055 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.925076 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7crt\" (UniqueName: \"kubernetes.io/projected/34178e14-d22f-4fbb-80e8-2a18fd062606-kube-api-access-r7crt\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.925101 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28p4n\" (UniqueName: \"kubernetes.io/projected/3017331e-6f47-4b7e-b9ad-607c6be8c20e-kube-api-access-28p4n\") pod 
\"manila-operator-controller-manager-849fcfbb6b-tl2kj\" (UID: \"3017331e-6f47-4b7e-b9ad-607c6be8c20e\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.925127 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snb96\" (UniqueName: \"kubernetes.io/projected/fad3c440-9e3f-4f25-b420-f1f1beb8976e-kube-api-access-snb96\") pod \"ironic-operator-controller-manager-768b776ffb-cvlvn\" (UID: \"fad3c440-9e3f-4f25-b420-f1f1beb8976e\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" Jan 27 08:05:04 crc kubenswrapper[4799]: E0127 08:05:04.925256 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:04 crc kubenswrapper[4799]: E0127 08:05:04.925318 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert podName:34178e14-d22f-4fbb-80e8-2a18fd062606 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:05.425282913 +0000 UTC m=+1171.736386978 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert") pod "infra-operator-controller-manager-7d75bc88d5-nc7r7" (UID: "34178e14-d22f-4fbb-80e8-2a18fd062606") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.935399 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.945382 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.946178 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.946834 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.950641 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-d8bk6" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.956661 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.970341 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.970823 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.973490 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.973812 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.988641 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kmks8" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.994085 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq"] Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.994847 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.997993 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pb9f7" Jan 27 08:05:04 crc kubenswrapper[4799]: I0127 08:05:04.999141 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7crt\" (UniqueName: \"kubernetes.io/projected/34178e14-d22f-4fbb-80e8-2a18fd062606-kube-api-access-r7crt\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.004283 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.010408 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.028714 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28p4n\" (UniqueName: \"kubernetes.io/projected/3017331e-6f47-4b7e-b9ad-607c6be8c20e-kube-api-access-28p4n\") pod \"manila-operator-controller-manager-849fcfbb6b-tl2kj\" (UID: \"3017331e-6f47-4b7e-b9ad-607c6be8c20e\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.028762 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snb96\" (UniqueName: \"kubernetes.io/projected/fad3c440-9e3f-4f25-b420-f1f1beb8976e-kube-api-access-snb96\") pod \"ironic-operator-controller-manager-768b776ffb-cvlvn\" (UID: 
\"fad3c440-9e3f-4f25-b420-f1f1beb8976e\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.028797 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrm7\" (UniqueName: \"kubernetes.io/projected/6424eee1-bc8b-46e6-86d5-405a13b0ccc9-kube-api-access-dtrm7\") pod \"neutron-operator-controller-manager-7ffd8d76d4-gwzxq\" (UID: \"6424eee1-bc8b-46e6-86d5-405a13b0ccc9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.028848 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksfqk\" (UniqueName: \"kubernetes.io/projected/ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f-kube-api-access-ksfqk\") pod \"octavia-operator-controller-manager-7875d7675-92h9x\" (UID: \"ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.028883 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-596fs\" (UniqueName: \"kubernetes.io/projected/5aca3207-fa5d-485b-ac2c-a9c3e17081a4-kube-api-access-596fs\") pod \"keystone-operator-controller-manager-55f684fd56-kww5k\" (UID: \"5aca3207-fa5d-485b-ac2c-a9c3e17081a4\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.028913 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5qz\" (UniqueName: \"kubernetes.io/projected/ff4f4931-e9c9-4b38-87e0-58a46c02b98d-kube-api-access-hm5qz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-msdv6\" (UID: \"ff4f4931-e9c9-4b38-87e0-58a46c02b98d\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.032363 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.047463 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.048277 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.053492 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.059196 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-596fs\" (UniqueName: \"kubernetes.io/projected/5aca3207-fa5d-485b-ac2c-a9c3e17081a4-kube-api-access-596fs\") pod \"keystone-operator-controller-manager-55f684fd56-kww5k\" (UID: \"5aca3207-fa5d-485b-ac2c-a9c3e17081a4\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.060021 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28p4n\" (UniqueName: \"kubernetes.io/projected/3017331e-6f47-4b7e-b9ad-607c6be8c20e-kube-api-access-28p4n\") pod \"manila-operator-controller-manager-849fcfbb6b-tl2kj\" (UID: \"3017331e-6f47-4b7e-b9ad-607c6be8c20e\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.069363 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-nbwpn" Jan 27 
08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.078753 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.079698 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.088759 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xgwbl" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.090395 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snb96\" (UniqueName: \"kubernetes.io/projected/fad3c440-9e3f-4f25-b420-f1f1beb8976e-kube-api-access-snb96\") pod \"ironic-operator-controller-manager-768b776ffb-cvlvn\" (UID: \"fad3c440-9e3f-4f25-b420-f1f1beb8976e\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.092102 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.092955 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.095820 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bnmn5" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.106409 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.125222 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.182402 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.182468 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7nwk\" (UniqueName: \"kubernetes.io/projected/0ecf0624-a24f-4ece-bc11-481d049df28e-kube-api-access-k7nwk\") pod \"nova-operator-controller-manager-7f54b7d6d4-phjqb\" (UID: \"0ecf0624-a24f-4ece-bc11-481d049df28e\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.185057 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrm7\" (UniqueName: \"kubernetes.io/projected/6424eee1-bc8b-46e6-86d5-405a13b0ccc9-kube-api-access-dtrm7\") pod \"neutron-operator-controller-manager-7ffd8d76d4-gwzxq\" (UID: \"6424eee1-bc8b-46e6-86d5-405a13b0ccc9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.185111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksfqk\" (UniqueName: \"kubernetes.io/projected/ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f-kube-api-access-ksfqk\") pod \"octavia-operator-controller-manager-7875d7675-92h9x\" (UID: \"ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.185144 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sj79\" (UniqueName: \"kubernetes.io/projected/d536c693-c313-4de1-a636-edf8d0e3504b-kube-api-access-2sj79\") pod 
\"ovn-operator-controller-manager-6f75f45d54-cwpd7\" (UID: \"d536c693-c313-4de1-a636-edf8d0e3504b\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.185190 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm5qz\" (UniqueName: \"kubernetes.io/projected/ff4f4931-e9c9-4b38-87e0-58a46c02b98d-kube-api-access-hm5qz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-msdv6\" (UID: \"ff4f4931-e9c9-4b38-87e0-58a46c02b98d\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.185210 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pct56\" (UniqueName: \"kubernetes.io/projected/037daa10-fc4e-42d1-9ef8-7484fd944508-kube-api-access-pct56\") pod \"placement-operator-controller-manager-79d5ccc684-9hj5w\" (UID: \"037daa10-fc4e-42d1-9ef8-7484fd944508\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.185268 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.189728 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qcjnm" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.189913 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.217077 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksfqk\" (UniqueName: \"kubernetes.io/projected/ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f-kube-api-access-ksfqk\") pod \"octavia-operator-controller-manager-7875d7675-92h9x\" (UID: \"ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.218076 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrm7\" (UniqueName: \"kubernetes.io/projected/6424eee1-bc8b-46e6-86d5-405a13b0ccc9-kube-api-access-dtrm7\") pod \"neutron-operator-controller-manager-7ffd8d76d4-gwzxq\" (UID: \"6424eee1-bc8b-46e6-86d5-405a13b0ccc9\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.231213 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.231795 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.232604 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm5qz\" (UniqueName: \"kubernetes.io/projected/ff4f4931-e9c9-4b38-87e0-58a46c02b98d-kube-api-access-hm5qz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-msdv6\" (UID: \"ff4f4931-e9c9-4b38-87e0-58a46c02b98d\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.253382 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.254753 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.258345 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.265598 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.266642 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.272669 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5skrb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.273183 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.289113 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sj79\" (UniqueName: \"kubernetes.io/projected/d536c693-c313-4de1-a636-edf8d0e3504b-kube-api-access-2sj79\") pod \"ovn-operator-controller-manager-6f75f45d54-cwpd7\" (UID: \"d536c693-c313-4de1-a636-edf8d0e3504b\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.289184 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.289212 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pct56\" (UniqueName: \"kubernetes.io/projected/037daa10-fc4e-42d1-9ef8-7484fd944508-kube-api-access-pct56\") pod \"placement-operator-controller-manager-79d5ccc684-9hj5w\" (UID: \"037daa10-fc4e-42d1-9ef8-7484fd944508\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.289248 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lchc8\" (UniqueName: \"kubernetes.io/projected/e7032d0e-676f-4153-87b6-0fce33337997-kube-api-access-lchc8\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.289317 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7nwk\" (UniqueName: \"kubernetes.io/projected/0ecf0624-a24f-4ece-bc11-481d049df28e-kube-api-access-k7nwk\") pod \"nova-operator-controller-manager-7f54b7d6d4-phjqb\" (UID: \"0ecf0624-a24f-4ece-bc11-481d049df28e\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.307411 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.308185 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.311778 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-5xnrd" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.320550 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pct56\" (UniqueName: \"kubernetes.io/projected/037daa10-fc4e-42d1-9ef8-7484fd944508-kube-api-access-pct56\") pod \"placement-operator-controller-manager-79d5ccc684-9hj5w\" (UID: \"037daa10-fc4e-42d1-9ef8-7484fd944508\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.322419 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sj79\" (UniqueName: \"kubernetes.io/projected/d536c693-c313-4de1-a636-edf8d0e3504b-kube-api-access-2sj79\") pod \"ovn-operator-controller-manager-6f75f45d54-cwpd7\" (UID: \"d536c693-c313-4de1-a636-edf8d0e3504b\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.334482 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7nwk\" (UniqueName: \"kubernetes.io/projected/0ecf0624-a24f-4ece-bc11-481d049df28e-kube-api-access-k7nwk\") pod \"nova-operator-controller-manager-7f54b7d6d4-phjqb\" (UID: \"0ecf0624-a24f-4ece-bc11-481d049df28e\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.338331 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.342682 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.347780 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.363311 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.388538 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.390243 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.390393 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lchc8\" (UniqueName: \"kubernetes.io/projected/e7032d0e-676f-4153-87b6-0fce33337997-kube-api-access-lchc8\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.390452 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbt78\" (UniqueName: \"kubernetes.io/projected/f5024a15-240a-410c-980d-109db1b46c03-kube-api-access-gbt78\") pod \"telemetry-operator-controller-manager-799bc87c89-m5npz\" (UID: \"f5024a15-240a-410c-980d-109db1b46c03\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.390515 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qct7\" (UniqueName: \"kubernetes.io/projected/7b6ea7e6-0b30-432b-a1e2-c11570a47ee7-kube-api-access-6qct7\") pod \"swift-operator-controller-manager-547cbdb99f-77ttp\" (UID: \"7b6ea7e6-0b30-432b-a1e2-c11570a47ee7\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.390548 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.390971 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.391011 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert podName:e7032d0e-676f-4153-87b6-0fce33337997 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:05.89099815 +0000 UTC m=+1172.202102215 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" (UID: "e7032d0e-676f-4153-87b6-0fce33337997") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.392605 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nqnnz" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.404935 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.412938 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.421048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lchc8\" (UniqueName: \"kubernetes.io/projected/e7032d0e-676f-4153-87b6-0fce33337997-kube-api-access-lchc8\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.483249 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.490483 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.491410 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z725t\" (UniqueName: \"kubernetes.io/projected/c62ef33b-0827-4909-b88a-a48396df7ddd-kube-api-access-z725t\") pod \"test-operator-controller-manager-69797bbcbd-kcrfp\" (UID: \"c62ef33b-0827-4909-b88a-a48396df7ddd\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.491475 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.491500 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbt78\" (UniqueName: \"kubernetes.io/projected/f5024a15-240a-410c-980d-109db1b46c03-kube-api-access-gbt78\") pod \"telemetry-operator-controller-manager-799bc87c89-m5npz\" (UID: \"f5024a15-240a-410c-980d-109db1b46c03\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.491560 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qct7\" (UniqueName: \"kubernetes.io/projected/7b6ea7e6-0b30-432b-a1e2-c11570a47ee7-kube-api-access-6qct7\") pod \"swift-operator-controller-manager-547cbdb99f-77ttp\" (UID: \"7b6ea7e6-0b30-432b-a1e2-c11570a47ee7\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.492060 4799 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.492129 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert podName:34178e14-d22f-4fbb-80e8-2a18fd062606 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:06.492110439 +0000 UTC m=+1172.803214504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert") pod "infra-operator-controller-manager-7d75bc88d5-nc7r7" (UID: "34178e14-d22f-4fbb-80e8-2a18fd062606") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.497360 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sd7kq" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.522389 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.528003 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qct7\" (UniqueName: \"kubernetes.io/projected/7b6ea7e6-0b30-432b-a1e2-c11570a47ee7-kube-api-access-6qct7\") pod \"swift-operator-controller-manager-547cbdb99f-77ttp\" (UID: \"7b6ea7e6-0b30-432b-a1e2-c11570a47ee7\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.530180 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbt78\" (UniqueName: \"kubernetes.io/projected/f5024a15-240a-410c-980d-109db1b46c03-kube-api-access-gbt78\") pod \"telemetry-operator-controller-manager-799bc87c89-m5npz\" (UID: 
\"f5024a15-240a-410c-980d-109db1b46c03\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.560060 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.561249 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.563608 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.563761 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-564bd" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.564012 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.568632 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.570595 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" Jan 27 08:05:05 crc kubenswrapper[4799]: W0127 08:05:05.578707 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd94d5e1a_ae08_488f_9d43_50c9d392bb64.slice/crio-cc587778f5f4ece49ea8bcfa533f4b9e5bc4b5801ce3cd8f637456be842ce8a4 WatchSource:0}: Error finding container cc587778f5f4ece49ea8bcfa533f4b9e5bc4b5801ce3cd8f637456be842ce8a4: Status 404 returned error can't find the container with id cc587778f5f4ece49ea8bcfa533f4b9e5bc4b5801ce3cd8f637456be842ce8a4 Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.579112 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.580703 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.582159 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l6jbx" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.584679 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.586266 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.592928 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l84qr\" (UniqueName: \"kubernetes.io/projected/48899916-aa13-4d02-89e3-11721dc22821-kube-api-access-l84qr\") pod \"watcher-operator-controller-manager-75db85654f-hs4t2\" (UID: 
\"48899916-aa13-4d02-89e3-11721dc22821\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.592981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z725t\" (UniqueName: \"kubernetes.io/projected/c62ef33b-0827-4909-b88a-a48396df7ddd-kube-api-access-z725t\") pod \"test-operator-controller-manager-69797bbcbd-kcrfp\" (UID: \"c62ef33b-0827-4909-b88a-a48396df7ddd\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.600447 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.614774 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z725t\" (UniqueName: \"kubernetes.io/projected/c62ef33b-0827-4909-b88a-a48396df7ddd-kube-api-access-z725t\") pod \"test-operator-controller-manager-69797bbcbd-kcrfp\" (UID: \"c62ef33b-0827-4909-b88a-a48396df7ddd\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.616884 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.657810 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.694279 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: 
\"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.694336 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s884\" (UniqueName: \"kubernetes.io/projected/8ac41e51-af98-4db3-bdde-9d0d2d90767f-kube-api-access-7s884\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v4vnl\" (UID: \"8ac41e51-af98-4db3-bdde-9d0d2d90767f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.694419 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.694451 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l84qr\" (UniqueName: \"kubernetes.io/projected/48899916-aa13-4d02-89e3-11721dc22821-kube-api-access-l84qr\") pod \"watcher-operator-controller-manager-75db85654f-hs4t2\" (UID: \"48899916-aa13-4d02-89e3-11721dc22821\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.694472 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpgp4\" (UniqueName: \"kubernetes.io/projected/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-kube-api-access-xpgp4\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " 
pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.731275 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l84qr\" (UniqueName: \"kubernetes.io/projected/48899916-aa13-4d02-89e3-11721dc22821-kube-api-access-l84qr\") pod \"watcher-operator-controller-manager-75db85654f-hs4t2\" (UID: \"48899916-aa13-4d02-89e3-11721dc22821\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.747909 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.759977 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.774620 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.796991 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.797054 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpgp4\" (UniqueName: \"kubernetes.io/projected/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-kube-api-access-xpgp4\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.797111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.797133 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s884\" (UniqueName: \"kubernetes.io/projected/8ac41e51-af98-4db3-bdde-9d0d2d90767f-kube-api-access-7s884\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v4vnl\" (UID: \"8ac41e51-af98-4db3-bdde-9d0d2d90767f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.797170 4799 secret.go:188] 
Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.797242 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:06.297223793 +0000 UTC m=+1172.608327858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.797436 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.797488 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:06.297473341 +0000 UTC m=+1172.608577406 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "metrics-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.809756 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.818493 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpgp4\" (UniqueName: \"kubernetes.io/projected/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-kube-api-access-xpgp4\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.822554 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.828415 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s884\" (UniqueName: \"kubernetes.io/projected/8ac41e51-af98-4db3-bdde-9d0d2d90767f-kube-api-access-7s884\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v4vnl\" (UID: \"8ac41e51-af98-4db3-bdde-9d0d2d90767f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.870628 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.899019 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.899220 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: E0127 08:05:05.899276 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert podName:e7032d0e-676f-4153-87b6-0fce33337997 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:06.899257147 +0000 UTC m=+1173.210361212 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" (UID: "e7032d0e-676f-4153-87b6-0fce33337997") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.911496 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.925076 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" event={"ID":"d94d5e1a-ae08-488f-9d43-50c9d392bb64","Type":"ContainerStarted","Data":"cc587778f5f4ece49ea8bcfa533f4b9e5bc4b5801ce3cd8f637456be842ce8a4"} Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.952106 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd"] Jan 27 08:05:05 crc kubenswrapper[4799]: I0127 08:05:05.978880 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" event={"ID":"55e4a841-f81d-438b-adc5-e826eb530cfe","Type":"ContainerStarted","Data":"9188d2d29e4f095f1bc7c5d4dde578834f1bfc80e053291f86ebf8f8f17722ff"} Jan 27 08:05:05 crc kubenswrapper[4799]: W0127 08:05:05.979274 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8236753b_6720_430d_81cf_7b6c0de5a0ee.slice/crio-d4b852a0312255696ed4cceb475a5544320e143950d311be3048d7f8b3bb0db5 WatchSource:0}: Error finding container d4b852a0312255696ed4cceb475a5544320e143950d311be3048d7f8b3bb0db5: Status 404 returned error can't find the container with id d4b852a0312255696ed4cceb475a5544320e143950d311be3048d7f8b3bb0db5 Jan 27 08:05:05 crc 
kubenswrapper[4799]: I0127 08:05:05.981682 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.239972 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k"] Jan 27 08:05:06 crc kubenswrapper[4799]: W0127 08:05:06.245464 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aca3207_fa5d_485b_ac2c_a9c3e17081a4.slice/crio-d96f216cfd4bf97c5b8d4312bc388b6d75d5f622948b37ab259fc023f2020502 WatchSource:0}: Error finding container d96f216cfd4bf97c5b8d4312bc388b6d75d5f622948b37ab259fc023f2020502: Status 404 returned error can't find the container with id d96f216cfd4bf97c5b8d4312bc388b6d75d5f622948b37ab259fc023f2020502 Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.306760 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.306871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.306939 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 
08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.306994 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.307042 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:07.307027913 +0000 UTC m=+1173.618131978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "webhook-server-cert" not found Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.307057 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:07.307050754 +0000 UTC m=+1173.618154819 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "metrics-server-cert" not found Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.310996 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.320266 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.328514 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.492596 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.500718 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.509553 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.510736 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.510798 4799 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert podName:34178e14-d22f-4fbb-80e8-2a18fd062606 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:08.510778823 +0000 UTC m=+1174.821882888 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert") pod "infra-operator-controller-manager-7d75bc88d5-nc7r7" (UID: "34178e14-d22f-4fbb-80e8-2a18fd062606") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.515020 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.528727 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq"] Jan 27 08:05:06 crc kubenswrapper[4799]: W0127 08:05:06.531879 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ecf0624_a24f_4ece_bc11_481d049df28e.slice/crio-40fa8cfc6cb51f82ebf10e94f3a0e188adb55400c41e3bf45f09ee2281395322 WatchSource:0}: Error finding container 40fa8cfc6cb51f82ebf10e94f3a0e188adb55400c41e3bf45f09ee2281395322: Status 404 returned error can't find the container with id 40fa8cfc6cb51f82ebf10e94f3a0e188adb55400c41e3bf45f09ee2281395322 Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.535600 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6"] Jan 27 08:05:06 crc kubenswrapper[4799]: W0127 08:05:06.544849 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6424eee1_bc8b_46e6_86d5_405a13b0ccc9.slice/crio-6d422cf1f92e1815a0def2757e89495c588ae2efff292f1da2d44d1102c05b52 WatchSource:0}: Error finding container 6d422cf1f92e1815a0def2757e89495c588ae2efff292f1da2d44d1102c05b52: Status 404 returned error can't find the container with id 6d422cf1f92e1815a0def2757e89495c588ae2efff292f1da2d44d1102c05b52 Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.551510 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtrm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-gwzxq_openstack-operators(6424eee1-bc8b-46e6-86d5-405a13b0ccc9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.552730 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" podUID="6424eee1-bc8b-46e6-86d5-405a13b0ccc9" Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.695855 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.701261 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w"] Jan 27 08:05:06 crc kubenswrapper[4799]: W0127 08:05:06.702697 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod037daa10_fc4e_42d1_9ef8_7484fd944508.slice/crio-71d242cfc0e66550cd778acead6b0cf3be02e9412372cff6d7fbb5e4f09299b0 WatchSource:0}: Error finding container 71d242cfc0e66550cd778acead6b0cf3be02e9412372cff6d7fbb5e4f09299b0: Status 404 returned error can't find the container with id 71d242cfc0e66550cd778acead6b0cf3be02e9412372cff6d7fbb5e4f09299b0 Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.710822 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp"] Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.715840 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl"] Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.722985 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pct56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-9hj5w_openstack-operators(037daa10-fc4e-42d1-9ef8-7484fd944508): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.724340 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" podUID="037daa10-fc4e-42d1-9ef8-7484fd944508" Jan 27 08:05:06 crc 
kubenswrapper[4799]: I0127 08:05:06.726092 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp"] Jan 27 08:05:06 crc kubenswrapper[4799]: W0127 08:05:06.728091 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b6ea7e6_0b30_432b_a1e2_c11570a47ee7.slice/crio-feb7fdc27ff9e820abd7eb61a4accf479e298b45d75f6c93157d890672872719 WatchSource:0}: Error finding container feb7fdc27ff9e820abd7eb61a4accf479e298b45d75f6c93157d890672872719: Status 404 returned error can't find the container with id feb7fdc27ff9e820abd7eb61a4accf479e298b45d75f6c93157d890672872719 Jan 27 08:05:06 crc kubenswrapper[4799]: W0127 08:05:06.732250 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac41e51_af98_4db3_bdde_9d0d2d90767f.slice/crio-794d7065145f92cd1b5ce9fab89f2ff8fbfe73b4ee6def5bb7aa593d3b16c629 WatchSource:0}: Error finding container 794d7065145f92cd1b5ce9fab89f2ff8fbfe73b4ee6def5bb7aa593d3b16c629: Status 404 returned error can't find the container with id 794d7065145f92cd1b5ce9fab89f2ff8fbfe73b4ee6def5bb7aa593d3b16c629 Jan 27 08:05:06 crc kubenswrapper[4799]: W0127 08:05:06.737324 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc62ef33b_0827_4909_b88a_a48396df7ddd.slice/crio-9d4de0b66f1166b0d3bf93b9a46b4c4d9f6e9a43230b0e8ee933f8dee923251f WatchSource:0}: Error finding container 9d4de0b66f1166b0d3bf93b9a46b4c4d9f6e9a43230b0e8ee933f8dee923251f: Status 404 returned error can't find the container with id 9d4de0b66f1166b0d3bf93b9a46b4c4d9f6e9a43230b0e8ee933f8dee923251f Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.737530 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qct7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-77ttp_openstack-operators(7b6ea7e6-0b30-432b-a1e2-c11570a47ee7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.738826 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" podUID="7b6ea7e6-0b30-432b-a1e2-c11570a47ee7" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.739453 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z725t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-kcrfp_openstack-operators(c62ef33b-0827-4909-b88a-a48396df7ddd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.739953 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 
500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7s884,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v4vnl_openstack-operators(8ac41e51-af98-4db3-bdde-9d0d2d90767f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.740870 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" podUID="c62ef33b-0827-4909-b88a-a48396df7ddd" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.741082 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" podUID="8ac41e51-af98-4db3-bdde-9d0d2d90767f" Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.914809 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.915062 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.915153 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert podName:e7032d0e-676f-4153-87b6-0fce33337997 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:08.915133175 +0000 UTC m=+1175.226237240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" (UID: "e7032d0e-676f-4153-87b6-0fce33337997") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:06 crc kubenswrapper[4799]: I0127 08:05:06.995936 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" event={"ID":"8ac41e51-af98-4db3-bdde-9d0d2d90767f","Type":"ContainerStarted","Data":"794d7065145f92cd1b5ce9fab89f2ff8fbfe73b4ee6def5bb7aa593d3b16c629"} Jan 27 08:05:06 crc kubenswrapper[4799]: E0127 08:05:06.998027 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" 
podUID="8ac41e51-af98-4db3-bdde-9d0d2d90767f" Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.000745 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" event={"ID":"fad3c440-9e3f-4f25-b420-f1f1beb8976e","Type":"ContainerStarted","Data":"94edbe19b28bc321f2489b435889295ec14345c90529f959fdc8efbd25d0cdce"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.002092 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" event={"ID":"d536c693-c313-4de1-a636-edf8d0e3504b","Type":"ContainerStarted","Data":"9da4e490005a1e621f8d8d1b1e17772a864a99da7fab78d2f805a8b310db10d4"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.005585 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" event={"ID":"f5024a15-240a-410c-980d-109db1b46c03","Type":"ContainerStarted","Data":"e9bd2da6c5ee9c2d4ff467f348096fc94efd28d2b4c4b93e0caf2da9307f5b57"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.006922 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" event={"ID":"8236753b-6720-430d-81cf-7b6c0de5a0ee","Type":"ContainerStarted","Data":"d4b852a0312255696ed4cceb475a5544320e143950d311be3048d7f8b3bb0db5"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.008040 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" event={"ID":"037daa10-fc4e-42d1-9ef8-7484fd944508","Type":"ContainerStarted","Data":"71d242cfc0e66550cd778acead6b0cf3be02e9412372cff6d7fbb5e4f09299b0"} Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.011646 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" podUID="037daa10-fc4e-42d1-9ef8-7484fd944508" Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.012722 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" event={"ID":"e322d396-a1f7-4802-bba8-91bd472c24e3","Type":"ContainerStarted","Data":"c94a0b6871b66fe78e9d750f1d0f0881656ba82bd67dd3fac04d0ef13811244a"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.014082 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" event={"ID":"ff4f4931-e9c9-4b38-87e0-58a46c02b98d","Type":"ContainerStarted","Data":"aa4f4c8c1b1e3e62b838b0793b6cbdfd15194d5fa657258484d611d0b1103ee2"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.015650 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" event={"ID":"48899916-aa13-4d02-89e3-11721dc22821","Type":"ContainerStarted","Data":"b7913107371134fcf91e7b963b0a3f6a2fa85557b3293ed086127c125a7d3cd9"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.018660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" event={"ID":"4cd87a60-6daa-4298-bc64-ff1fb8782577","Type":"ContainerStarted","Data":"8e537f3f84d8cdfde08af4b93160bfde89d95992ae03a48e1456513bffdb6c6c"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.020542 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" 
event={"ID":"7b6ea7e6-0b30-432b-a1e2-c11570a47ee7","Type":"ContainerStarted","Data":"feb7fdc27ff9e820abd7eb61a4accf479e298b45d75f6c93157d890672872719"} Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.023069 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" podUID="7b6ea7e6-0b30-432b-a1e2-c11570a47ee7" Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.023997 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" event={"ID":"0ecf0624-a24f-4ece-bc11-481d049df28e","Type":"ContainerStarted","Data":"40fa8cfc6cb51f82ebf10e94f3a0e188adb55400c41e3bf45f09ee2281395322"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.033638 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" event={"ID":"ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f","Type":"ContainerStarted","Data":"81dfbd3813df7d0c6d5e0e3d7b0d81240d274cd5584f11b5e79522d7857ebe2a"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.036456 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" event={"ID":"3017331e-6f47-4b7e-b9ad-607c6be8c20e","Type":"ContainerStarted","Data":"f88d18709efbf7e56ba689c4bb8c581828488b30a06ae5d2a9683e1bd7151ea2"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.037704 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" 
event={"ID":"5aca3207-fa5d-485b-ac2c-a9c3e17081a4","Type":"ContainerStarted","Data":"d96f216cfd4bf97c5b8d4312bc388b6d75d5f622948b37ab259fc023f2020502"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.039004 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" event={"ID":"6424eee1-bc8b-46e6-86d5-405a13b0ccc9","Type":"ContainerStarted","Data":"6d422cf1f92e1815a0def2757e89495c588ae2efff292f1da2d44d1102c05b52"} Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.040931 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" podUID="6424eee1-bc8b-46e6-86d5-405a13b0ccc9" Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.041566 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" event={"ID":"7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8","Type":"ContainerStarted","Data":"64a1e81827aded029a8b76174a3960211f91b8f2431cde4763998346b38a481b"} Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.052197 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" event={"ID":"c62ef33b-0827-4909-b88a-a48396df7ddd","Type":"ContainerStarted","Data":"9d4de0b66f1166b0d3bf93b9a46b4c4d9f6e9a43230b0e8ee933f8dee923251f"} Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.057180 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" podUID="c62ef33b-0827-4909-b88a-a48396df7ddd" Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.321146 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:07 crc kubenswrapper[4799]: I0127 08:05:07.321290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.321419 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.321559 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.321565 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:09.321532272 +0000 UTC m=+1175.632636507 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "metrics-server-cert" not found Jan 27 08:05:07 crc kubenswrapper[4799]: E0127 08:05:07.321713 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:09.321682426 +0000 UTC m=+1175.632786671 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "webhook-server-cert" not found Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.066156 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" podUID="c62ef33b-0827-4909-b88a-a48396df7ddd" Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.066979 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" podUID="8ac41e51-af98-4db3-bdde-9d0d2d90767f" Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.066974 4799 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" podUID="6424eee1-bc8b-46e6-86d5-405a13b0ccc9" Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.067030 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" podUID="7b6ea7e6-0b30-432b-a1e2-c11570a47ee7" Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.068017 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" podUID="037daa10-fc4e-42d1-9ef8-7484fd944508" Jan 27 08:05:08 crc kubenswrapper[4799]: I0127 08:05:08.539992 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.540148 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:08 crc 
kubenswrapper[4799]: E0127 08:05:08.540230 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert podName:34178e14-d22f-4fbb-80e8-2a18fd062606 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:12.540210042 +0000 UTC m=+1178.851314107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert") pod "infra-operator-controller-manager-7d75bc88d5-nc7r7" (UID: "34178e14-d22f-4fbb-80e8-2a18fd062606") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:08 crc kubenswrapper[4799]: I0127 08:05:08.944955 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.945233 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:08 crc kubenswrapper[4799]: E0127 08:05:08.945376 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert podName:e7032d0e-676f-4153-87b6-0fce33337997 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:12.945328535 +0000 UTC m=+1179.256432600 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" (UID: "e7032d0e-676f-4153-87b6-0fce33337997") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:09 crc kubenswrapper[4799]: I0127 08:05:09.349914 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:09 crc kubenswrapper[4799]: I0127 08:05:09.350029 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:09 crc kubenswrapper[4799]: E0127 08:05:09.350180 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:05:09 crc kubenswrapper[4799]: E0127 08:05:09.350193 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:05:09 crc kubenswrapper[4799]: E0127 08:05:09.350241 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:13.350226292 +0000 UTC m=+1179.661330357 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "webhook-server-cert" not found Jan 27 08:05:09 crc kubenswrapper[4799]: E0127 08:05:09.350274 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:13.350254553 +0000 UTC m=+1179.661358618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "metrics-server-cert" not found Jan 27 08:05:12 crc kubenswrapper[4799]: I0127 08:05:12.605388 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:12 crc kubenswrapper[4799]: E0127 08:05:12.605846 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:12 crc kubenswrapper[4799]: E0127 08:05:12.605893 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert podName:34178e14-d22f-4fbb-80e8-2a18fd062606 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:20.605878499 +0000 UTC m=+1186.916982564 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert") pod "infra-operator-controller-manager-7d75bc88d5-nc7r7" (UID: "34178e14-d22f-4fbb-80e8-2a18fd062606") : secret "infra-operator-webhook-server-cert" not found Jan 27 08:05:13 crc kubenswrapper[4799]: I0127 08:05:13.011186 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:13 crc kubenswrapper[4799]: E0127 08:05:13.011454 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:13 crc kubenswrapper[4799]: E0127 08:05:13.012214 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert podName:e7032d0e-676f-4153-87b6-0fce33337997 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:21.012183394 +0000 UTC m=+1187.323287499 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" (UID: "e7032d0e-676f-4153-87b6-0fce33337997") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:13 crc kubenswrapper[4799]: I0127 08:05:13.415777 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:13 crc kubenswrapper[4799]: I0127 08:05:13.415880 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:13 crc kubenswrapper[4799]: E0127 08:05:13.416004 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:05:13 crc kubenswrapper[4799]: E0127 08:05:13.416051 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:21.416037163 +0000 UTC m=+1187.727141228 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "webhook-server-cert" not found Jan 27 08:05:13 crc kubenswrapper[4799]: E0127 08:05:13.416369 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:05:13 crc kubenswrapper[4799]: E0127 08:05:13.416395 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:21.416387622 +0000 UTC m=+1187.727491687 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "metrics-server-cert" not found Jan 27 08:05:20 crc kubenswrapper[4799]: E0127 08:05:20.187462 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 08:05:20 crc kubenswrapper[4799]: E0127 08:05:20.188169 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-596fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-kww5k_openstack-operators(5aca3207-fa5d-485b-ac2c-a9c3e17081a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:05:20 crc kubenswrapper[4799]: E0127 08:05:20.189400 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" podUID="5aca3207-fa5d-485b-ac2c-a9c3e17081a4" Jan 27 08:05:20 crc kubenswrapper[4799]: I0127 08:05:20.623442 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:20 crc kubenswrapper[4799]: I0127 08:05:20.631561 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/34178e14-d22f-4fbb-80e8-2a18fd062606-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-nc7r7\" (UID: \"34178e14-d22f-4fbb-80e8-2a18fd062606\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:20 crc kubenswrapper[4799]: I0127 08:05:20.636405 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.029153 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:21 crc kubenswrapper[4799]: E0127 08:05:21.029555 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:21 crc kubenswrapper[4799]: E0127 08:05:21.029966 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert podName:e7032d0e-676f-4153-87b6-0fce33337997 nodeName:}" failed. No retries permitted until 2026-01-27 08:05:37.029940537 +0000 UTC m=+1203.341044632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" (UID: "e7032d0e-676f-4153-87b6-0fce33337997") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.160373 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" event={"ID":"55e4a841-f81d-438b-adc5-e826eb530cfe","Type":"ContainerStarted","Data":"fe1737e465bc9329dfb9cec8d8d2c843d88a559bf9d92eaedf7c09a8874c1bb2"} Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.160972 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.175859 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" event={"ID":"3017331e-6f47-4b7e-b9ad-607c6be8c20e","Type":"ContainerStarted","Data":"583d5baf7ed56c9422e628e79540dcf7b8dc4cdbe95139e8b6f551e204b9df2b"} Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.176008 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.178398 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" event={"ID":"48899916-aa13-4d02-89e3-11721dc22821","Type":"ContainerStarted","Data":"94281a57841e6861612a698065ab696e675d6414284ad5b54c25c99e99817122"} Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.178497 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" 
Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.181655 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" event={"ID":"7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8","Type":"ContainerStarted","Data":"afbd0c2bee8abfba2365e2decff6c63b33f6a9027e3c8e63356a849342c3e0c4"} Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.181828 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.186011 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" event={"ID":"ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f","Type":"ContainerStarted","Data":"f948f573b32f8ec8e7563d31fabf0bc2e40c6a7b3379fd55089b86bbfebff972"} Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.186168 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" Jan 27 08:05:21 crc kubenswrapper[4799]: E0127 08:05:21.188138 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" podUID="5aca3207-fa5d-485b-ac2c-a9c3e17081a4" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.201933 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" podStartSLOduration=3.262966173 podStartE2EDuration="17.201909083s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:05.691867858 +0000 UTC m=+1172.002971923" 
lastFinishedPulling="2026-01-27 08:05:19.630810778 +0000 UTC m=+1185.941914833" observedRunningTime="2026-01-27 08:05:21.19017472 +0000 UTC m=+1187.501278795" watchObservedRunningTime="2026-01-27 08:05:21.201909083 +0000 UTC m=+1187.513013158" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.217199 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" podStartSLOduration=3.385254254 podStartE2EDuration="17.217180632s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.332422532 +0000 UTC m=+1172.643526597" lastFinishedPulling="2026-01-27 08:05:20.16434891 +0000 UTC m=+1186.475452975" observedRunningTime="2026-01-27 08:05:21.210288483 +0000 UTC m=+1187.521392558" watchObservedRunningTime="2026-01-27 08:05:21.217180632 +0000 UTC m=+1187.528284697" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.252167 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" podStartSLOduration=3.508348276 podStartE2EDuration="17.252148293s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:05.886950839 +0000 UTC m=+1172.198054904" lastFinishedPulling="2026-01-27 08:05:19.630750856 +0000 UTC m=+1185.941854921" observedRunningTime="2026-01-27 08:05:21.245431869 +0000 UTC m=+1187.556535944" watchObservedRunningTime="2026-01-27 08:05:21.252148293 +0000 UTC m=+1187.563252358" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.268506 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" podStartSLOduration=3.4385982090000002 podStartE2EDuration="17.268482682s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.334600711 +0000 UTC m=+1172.645704776" 
lastFinishedPulling="2026-01-27 08:05:20.164485164 +0000 UTC m=+1186.475589249" observedRunningTime="2026-01-27 08:05:21.266926069 +0000 UTC m=+1187.578030134" watchObservedRunningTime="2026-01-27 08:05:21.268482682 +0000 UTC m=+1187.579586757" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.291281 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" podStartSLOduration=3.155848009 podStartE2EDuration="16.291259638s" podCreationTimestamp="2026-01-27 08:05:05 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.532122959 +0000 UTC m=+1172.843227024" lastFinishedPulling="2026-01-27 08:05:19.667534588 +0000 UTC m=+1185.978638653" observedRunningTime="2026-01-27 08:05:21.288524823 +0000 UTC m=+1187.599628888" watchObservedRunningTime="2026-01-27 08:05:21.291259638 +0000 UTC m=+1187.602363703" Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.434647 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:21 crc kubenswrapper[4799]: E0127 08:05:21.434911 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.435169 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:21 
crc kubenswrapper[4799]: E0127 08:05:21.435280 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:37.435247535 +0000 UTC m=+1203.746351770 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "metrics-server-cert" not found Jan 27 08:05:21 crc kubenswrapper[4799]: E0127 08:05:21.435397 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 08:05:21 crc kubenswrapper[4799]: E0127 08:05:21.435498 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs podName:ac37a700-e0d3-4751-b72f-bc48bd3ef0cb nodeName:}" failed. No retries permitted until 2026-01-27 08:05:37.43545803 +0000 UTC m=+1203.746562095 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs") pod "openstack-operator-controller-manager-54bc44cbfd-w99km" (UID: "ac37a700-e0d3-4751-b72f-bc48bd3ef0cb") : secret "webhook-server-cert" not found Jan 27 08:05:21 crc kubenswrapper[4799]: I0127 08:05:21.831563 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7"] Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.206321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" event={"ID":"fad3c440-9e3f-4f25-b420-f1f1beb8976e","Type":"ContainerStarted","Data":"91716964ace510bd02fe65ee574cdd2cc06c3787f8ec21218579253fa982e38b"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.207514 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.212847 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" event={"ID":"e322d396-a1f7-4802-bba8-91bd472c24e3","Type":"ContainerStarted","Data":"a60622cfacae699a058656af3c0c57d1f1505b9b80041f810dabda9bc83dc568"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.213052 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.232553 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" event={"ID":"0ecf0624-a24f-4ece-bc11-481d049df28e","Type":"ContainerStarted","Data":"5c42ab8be7b2d45fccf8e3dcca4a903c7e0f6613340e3bba44a218192bd78326"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 
08:05:22.233572 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.239917 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" event={"ID":"d536c693-c313-4de1-a636-edf8d0e3504b","Type":"ContainerStarted","Data":"dbbf077e7d7e947005abfdcf6d054a3318c0e3bdb0e93f8662f97aef7904d226"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.241124 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.245346 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" event={"ID":"34178e14-d22f-4fbb-80e8-2a18fd062606","Type":"ContainerStarted","Data":"41c4aa240bc5e628d70f74565b4ab0d51cfb2dfcbf82fa2c404e4585191ed241"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.256075 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" podStartSLOduration=4.961024935 podStartE2EDuration="18.25604597s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.335940018 +0000 UTC m=+1172.647044083" lastFinishedPulling="2026-01-27 08:05:19.630961053 +0000 UTC m=+1185.942065118" observedRunningTime="2026-01-27 08:05:22.233843781 +0000 UTC m=+1188.544947846" watchObservedRunningTime="2026-01-27 08:05:22.25604597 +0000 UTC m=+1188.567150035" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.256956 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" 
event={"ID":"c62ef33b-0827-4909-b88a-a48396df7ddd","Type":"ContainerStarted","Data":"5dcec6bc5218703c02d7d24d960c72ad014784d8e3f33073a47db6f7ccbec942"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.257919 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.260545 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" podStartSLOduration=4.074677201 podStartE2EDuration="18.260534835s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:05.990767043 +0000 UTC m=+1172.301871108" lastFinishedPulling="2026-01-27 08:05:20.176624677 +0000 UTC m=+1186.487728742" observedRunningTime="2026-01-27 08:05:22.249114771 +0000 UTC m=+1188.560218856" watchObservedRunningTime="2026-01-27 08:05:22.260534835 +0000 UTC m=+1188.571638900" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.264912 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" event={"ID":"d94d5e1a-ae08-488f-9d43-50c9d392bb64","Type":"ContainerStarted","Data":"3d8dbb230b9812ce7a0a6f0331c67627ec65687f4e4fd1d146fb06fd910cac8c"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.265791 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.273264 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" podStartSLOduration=4.582496925 podStartE2EDuration="18.273249634s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.542774612 +0000 UTC m=+1172.853878677" 
lastFinishedPulling="2026-01-27 08:05:20.233527311 +0000 UTC m=+1186.544631386" observedRunningTime="2026-01-27 08:05:22.270786976 +0000 UTC m=+1188.581891041" watchObservedRunningTime="2026-01-27 08:05:22.273249634 +0000 UTC m=+1188.584353719" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.274345 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" event={"ID":"ff4f4931-e9c9-4b38-87e0-58a46c02b98d","Type":"ContainerStarted","Data":"0ab899c2890590138c030840d0d8d0f9223fb5e5e490fc605c283a78c987763a"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.275236 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.290154 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" event={"ID":"4cd87a60-6daa-4298-bc64-ff1fb8782577","Type":"ContainerStarted","Data":"8f2375285b8d9c3ea9aff743f09b253d41ea8093939e71da5e9a264526f434b5"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.290828 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.324088 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" event={"ID":"f5024a15-240a-410c-980d-109db1b46c03","Type":"ContainerStarted","Data":"8ec54b40999f06b90ddf170a3af43f87f74b3119d1fb02af0f68bd7f66b3a617"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.324641 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.327216 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" podStartSLOduration=2.7595671790000003 podStartE2EDuration="17.327186016s" podCreationTimestamp="2026-01-27 08:05:05 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.739292182 +0000 UTC m=+1173.050396247" lastFinishedPulling="2026-01-27 08:05:21.306911019 +0000 UTC m=+1187.618015084" observedRunningTime="2026-01-27 08:05:22.295831084 +0000 UTC m=+1188.606935149" watchObservedRunningTime="2026-01-27 08:05:22.327186016 +0000 UTC m=+1188.638290081" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.329386 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" podStartSLOduration=3.747487429 podStartE2EDuration="18.329296604s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:05.583731867 +0000 UTC m=+1171.894835932" lastFinishedPulling="2026-01-27 08:05:20.165541042 +0000 UTC m=+1186.476645107" observedRunningTime="2026-01-27 08:05:22.322765744 +0000 UTC m=+1188.633869809" watchObservedRunningTime="2026-01-27 08:05:22.329296604 +0000 UTC m=+1188.640400669" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.330041 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" event={"ID":"8236753b-6720-430d-81cf-7b6c0de5a0ee","Type":"ContainerStarted","Data":"6cf35ab237b16b199ac8d36a65c1e1bf7de4c4d47d76abd881f1a0b3d25a2bf6"} Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.331326 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.364769 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" podStartSLOduration=4.914053555 podStartE2EDuration="18.364748168s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.713967776 +0000 UTC m=+1173.025071841" lastFinishedPulling="2026-01-27 08:05:20.164662389 +0000 UTC m=+1186.475766454" observedRunningTime="2026-01-27 08:05:22.363815382 +0000 UTC m=+1188.674919447" watchObservedRunningTime="2026-01-27 08:05:22.364748168 +0000 UTC m=+1188.675852233" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.393464 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" podStartSLOduration=4.760079574 podStartE2EDuration="18.393438787s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.529543178 +0000 UTC m=+1172.840647243" lastFinishedPulling="2026-01-27 08:05:20.162902391 +0000 UTC m=+1186.474006456" observedRunningTime="2026-01-27 08:05:22.388902022 +0000 UTC m=+1188.700006107" watchObservedRunningTime="2026-01-27 08:05:22.393438787 +0000 UTC m=+1188.704542852" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.413221 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" podStartSLOduration=4.755641852 podStartE2EDuration="18.413171479s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.52963093 +0000 UTC m=+1172.840734995" lastFinishedPulling="2026-01-27 08:05:20.187160537 +0000 UTC m=+1186.498264622" observedRunningTime="2026-01-27 08:05:22.410583068 +0000 UTC m=+1188.721687133" watchObservedRunningTime="2026-01-27 08:05:22.413171479 +0000 UTC m=+1188.724275544" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.451048 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" podStartSLOduration=4.280469034 podStartE2EDuration="18.451027769s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:05.991244135 +0000 UTC m=+1172.302348190" lastFinishedPulling="2026-01-27 08:05:20.16180286 +0000 UTC m=+1186.472906925" observedRunningTime="2026-01-27 08:05:22.438277329 +0000 UTC m=+1188.749381394" watchObservedRunningTime="2026-01-27 08:05:22.451027769 +0000 UTC m=+1188.762131834" Jan 27 08:05:22 crc kubenswrapper[4799]: I0127 08:05:22.459968 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" podStartSLOduration=4.144619451 podStartE2EDuration="18.459955464s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:05.875137285 +0000 UTC m=+1172.186241350" lastFinishedPulling="2026-01-27 08:05:20.190473298 +0000 UTC m=+1186.501577363" observedRunningTime="2026-01-27 08:05:22.458606997 +0000 UTC m=+1188.769711082" watchObservedRunningTime="2026-01-27 08:05:22.459955464 +0000 UTC m=+1188.771059529" Jan 27 08:05:25 crc kubenswrapper[4799]: I0127 08:05:25.235207 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tl2kj" Jan 27 08:05:25 crc kubenswrapper[4799]: I0127 08:05:25.416121 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-92h9x" Jan 27 08:05:25 crc kubenswrapper[4799]: I0127 08:05:25.828347 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-hs4t2" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.412757 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" event={"ID":"8ac41e51-af98-4db3-bdde-9d0d2d90767f","Type":"ContainerStarted","Data":"5971cf7e744cef3e7ab716306678b7dae8c7495ee2748767e2c7e58475d72e27"} Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.414357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" event={"ID":"7b6ea7e6-0b30-432b-a1e2-c11570a47ee7","Type":"ContainerStarted","Data":"ebbc0855241d7eba7976f03e71be4548a52db7f15669f26dfc5708a3f9627285"} Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.414553 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.415895 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" event={"ID":"34178e14-d22f-4fbb-80e8-2a18fd062606","Type":"ContainerStarted","Data":"d2d791c709c887c75f389d1434b7dd7ade93c71491534aedeccc9027cb87f723"} Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.416033 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.417489 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" event={"ID":"037daa10-fc4e-42d1-9ef8-7484fd944508","Type":"ContainerStarted","Data":"da30eaf1969fabfd06bfb0d893e41e37f4bbabab367e53192a7c31125df825a0"} Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.417649 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.418630 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" event={"ID":"6424eee1-bc8b-46e6-86d5-405a13b0ccc9","Type":"ContainerStarted","Data":"9c6577dcfa958e3ac49068acbd214fb7539f1812e96a029a748bf6efd4932883"} Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.418801 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.435522 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnl" podStartSLOduration=2.118773521 podStartE2EDuration="23.435501755s" podCreationTimestamp="2026-01-27 08:05:05 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.739853148 +0000 UTC m=+1173.050957213" lastFinishedPulling="2026-01-27 08:05:28.056581382 +0000 UTC m=+1194.367685447" observedRunningTime="2026-01-27 08:05:28.433555052 +0000 UTC m=+1194.744659127" watchObservedRunningTime="2026-01-27 08:05:28.435501755 +0000 UTC m=+1194.746605820" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.451978 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" podStartSLOduration=3.154254415 podStartE2EDuration="24.451961168s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.72285934 +0000 UTC m=+1173.033963405" lastFinishedPulling="2026-01-27 08:05:28.020566093 +0000 UTC m=+1194.331670158" observedRunningTime="2026-01-27 08:05:28.448541183 +0000 UTC m=+1194.759645258" watchObservedRunningTime="2026-01-27 08:05:28.451961168 +0000 UTC m=+1194.763065233" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.465999 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" 
podStartSLOduration=3.16496084 podStartE2EDuration="24.465981043s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.737368259 +0000 UTC m=+1173.048472324" lastFinishedPulling="2026-01-27 08:05:28.038388452 +0000 UTC m=+1194.349492527" observedRunningTime="2026-01-27 08:05:28.464357919 +0000 UTC m=+1194.775461994" watchObservedRunningTime="2026-01-27 08:05:28.465981043 +0000 UTC m=+1194.777085108" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.491358 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" podStartSLOduration=18.297364037 podStartE2EDuration="24.49133541s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:21.83206578 +0000 UTC m=+1188.143169845" lastFinishedPulling="2026-01-27 08:05:28.026037163 +0000 UTC m=+1194.337141218" observedRunningTime="2026-01-27 08:05:28.482340722 +0000 UTC m=+1194.793444797" watchObservedRunningTime="2026-01-27 08:05:28.49133541 +0000 UTC m=+1194.802439485" Jan 27 08:05:28 crc kubenswrapper[4799]: I0127 08:05:28.501812 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" podStartSLOduration=3.032680355 podStartE2EDuration="24.501796937s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.551373168 +0000 UTC m=+1172.862477233" lastFinishedPulling="2026-01-27 08:05:28.02048975 +0000 UTC m=+1194.331593815" observedRunningTime="2026-01-27 08:05:28.499588597 +0000 UTC m=+1194.810692662" watchObservedRunningTime="2026-01-27 08:05:28.501796937 +0000 UTC m=+1194.812901002" Jan 27 08:05:34 crc kubenswrapper[4799]: I0127 08:05:34.902456 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-vznwp" Jan 27 08:05:34 crc 
kubenswrapper[4799]: I0127 08:05:34.955512 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-hzmgz" Jan 27 08:05:34 crc kubenswrapper[4799]: I0127 08:05:34.960210 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-9qhwd" Jan 27 08:05:34 crc kubenswrapper[4799]: I0127 08:05:34.973089 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-dv7wf" Jan 27 08:05:34 crc kubenswrapper[4799]: I0127 08:05:34.976205 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-t6chd" Jan 27 08:05:35 crc kubenswrapper[4799]: I0127 08:05:35.007809 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-z4lbd" Jan 27 08:05:35 crc kubenswrapper[4799]: I0127 08:05:35.235795 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-cvlvn" Jan 27 08:05:35 crc kubenswrapper[4799]: I0127 08:05:35.269137 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-msdv6" Jan 27 08:05:35 crc kubenswrapper[4799]: I0127 08:05:35.345726 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-gwzxq" Jan 27 08:05:35 crc kubenswrapper[4799]: I0127 08:05:35.366695 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cwpd7" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:35.573885 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-phjqb" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:35.603542 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-9hj5w" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:35.751272 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-77ttp" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:35.763851 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-m5npz" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:35.780336 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kcrfp" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.097869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.105180 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7032d0e-676f-4153-87b6-0fce33337997-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb\" (UID: \"e7032d0e-676f-4153-87b6-0fce33337997\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.229529 4799 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qcjnm" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.235870 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.504701 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.505317 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.510392 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-webhook-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: \"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.522965 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac37a700-e0d3-4751-b72f-bc48bd3ef0cb-metrics-certs\") pod \"openstack-operator-controller-manager-54bc44cbfd-w99km\" (UID: 
\"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb\") " pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.688820 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-564bd" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:37.697124 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:40.643488 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-nc7r7" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:45.560118 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" event={"ID":"5aca3207-fa5d-485b-ac2c-a9c3e17081a4","Type":"ContainerStarted","Data":"d0bdba46a0c01b341ff90080ce4ef6fb8915469a4ac06d7b0d14103f0a468067"} Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:45.560905 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:45.584192 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km"] Jan 27 08:05:45 crc kubenswrapper[4799]: W0127 08:05:45.585916 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac37a700_e0d3_4751_b72f_bc48bd3ef0cb.slice/crio-5f5569f887f6033ec924e4e36353484a369a3cf58fda693c6d133fb42f7cb066 WatchSource:0}: Error finding container 5f5569f887f6033ec924e4e36353484a369a3cf58fda693c6d133fb42f7cb066: Status 404 returned error can't find the container with 
id 5f5569f887f6033ec924e4e36353484a369a3cf58fda693c6d133fb42f7cb066 Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:45.589388 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" podStartSLOduration=2.6282082 podStartE2EDuration="41.589365662s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:06.252942837 +0000 UTC m=+1172.564046902" lastFinishedPulling="2026-01-27 08:05:45.214100259 +0000 UTC m=+1211.525204364" observedRunningTime="2026-01-27 08:05:45.575553003 +0000 UTC m=+1211.886657068" watchObservedRunningTime="2026-01-27 08:05:45.589365662 +0000 UTC m=+1211.900469737" Jan 27 08:05:45 crc kubenswrapper[4799]: W0127 08:05:45.592443 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7032d0e_676f_4153_87b6_0fce33337997.slice/crio-ec035a41465d296a6d7ea5d15988840603e58fa6853b0c2073495f9409076efb WatchSource:0}: Error finding container ec035a41465d296a6d7ea5d15988840603e58fa6853b0c2073495f9409076efb: Status 404 returned error can't find the container with id ec035a41465d296a6d7ea5d15988840603e58fa6853b0c2073495f9409076efb Jan 27 08:05:45 crc kubenswrapper[4799]: I0127 08:05:45.598677 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb"] Jan 27 08:05:46 crc kubenswrapper[4799]: I0127 08:05:46.569111 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" event={"ID":"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb","Type":"ContainerStarted","Data":"ec9d0f3f82831adf50ab49e88e0b48d10510fad1b040cccc8d53424f4c5aa2b3"} Jan 27 08:05:46 crc kubenswrapper[4799]: I0127 08:05:46.569156 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" event={"ID":"ac37a700-e0d3-4751-b72f-bc48bd3ef0cb","Type":"ContainerStarted","Data":"5f5569f887f6033ec924e4e36353484a369a3cf58fda693c6d133fb42f7cb066"} Jan 27 08:05:46 crc kubenswrapper[4799]: I0127 08:05:46.569257 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:05:46 crc kubenswrapper[4799]: I0127 08:05:46.571622 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" event={"ID":"e7032d0e-676f-4153-87b6-0fce33337997","Type":"ContainerStarted","Data":"ec035a41465d296a6d7ea5d15988840603e58fa6853b0c2073495f9409076efb"} Jan 27 08:05:46 crc kubenswrapper[4799]: I0127 08:05:46.599982 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" podStartSLOduration=41.599963114 podStartE2EDuration="41.599963114s" podCreationTimestamp="2026-01-27 08:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:05:46.599767818 +0000 UTC m=+1212.910871893" watchObservedRunningTime="2026-01-27 08:05:46.599963114 +0000 UTC m=+1212.911067179" Jan 27 08:05:47 crc kubenswrapper[4799]: I0127 08:05:47.583046 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" event={"ID":"e7032d0e-676f-4153-87b6-0fce33337997","Type":"ContainerStarted","Data":"35ceba79b9eeb201ca252456ef606673a56a9d6cdec1490ec2036b5807461dc3"} Jan 27 08:05:47 crc kubenswrapper[4799]: I0127 08:05:47.583545 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 
27 08:05:47 crc kubenswrapper[4799]: I0127 08:05:47.621291 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" podStartSLOduration=42.125524396 podStartE2EDuration="43.62126509s" podCreationTimestamp="2026-01-27 08:05:04 +0000 UTC" firstStartedPulling="2026-01-27 08:05:45.594818652 +0000 UTC m=+1211.905922717" lastFinishedPulling="2026-01-27 08:05:47.090559306 +0000 UTC m=+1213.401663411" observedRunningTime="2026-01-27 08:05:47.621170368 +0000 UTC m=+1213.932274463" watchObservedRunningTime="2026-01-27 08:05:47.62126509 +0000 UTC m=+1213.932369175" Jan 27 08:05:53 crc kubenswrapper[4799]: I0127 08:05:53.731720 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:05:53 crc kubenswrapper[4799]: I0127 08:05:53.732590 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:05:55 crc kubenswrapper[4799]: I0127 08:05:55.185588 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-kww5k" Jan 27 08:05:57 crc kubenswrapper[4799]: I0127 08:05:57.243588 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb" Jan 27 08:05:57 crc kubenswrapper[4799]: I0127 08:05:57.704567 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-54bc44cbfd-w99km" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.423797 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hf2rs"] Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.425977 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.428044 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.429321 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.429410 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kdbrh" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.429682 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.434771 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hf2rs"] Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.499781 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mmhkq"] Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.501166 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.511373 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.514806 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mmhkq"] Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.560681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfst\" (UniqueName: \"kubernetes.io/projected/86eb2480-c0ab-4904-94ba-eee2985dfe17-kube-api-access-nnfst\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.560764 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw74h\" (UniqueName: \"kubernetes.io/projected/26abd42f-9a39-41aa-a729-03fa7b62d72e-kube-api-access-dw74h\") pod \"dnsmasq-dns-675f4bcbfc-hf2rs\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.560882 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.560905 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-config\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.560926 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26abd42f-9a39-41aa-a729-03fa7b62d72e-config\") pod \"dnsmasq-dns-675f4bcbfc-hf2rs\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.661725 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.661766 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-config\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.661787 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26abd42f-9a39-41aa-a729-03fa7b62d72e-config\") pod \"dnsmasq-dns-675f4bcbfc-hf2rs\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.661814 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnfst\" (UniqueName: \"kubernetes.io/projected/86eb2480-c0ab-4904-94ba-eee2985dfe17-kube-api-access-nnfst\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: 
I0127 08:06:12.661842 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw74h\" (UniqueName: \"kubernetes.io/projected/26abd42f-9a39-41aa-a729-03fa7b62d72e-kube-api-access-dw74h\") pod \"dnsmasq-dns-675f4bcbfc-hf2rs\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.662791 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.663045 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-config\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.664058 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26abd42f-9a39-41aa-a729-03fa7b62d72e-config\") pod \"dnsmasq-dns-675f4bcbfc-hf2rs\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.680933 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnfst\" (UniqueName: \"kubernetes.io/projected/86eb2480-c0ab-4904-94ba-eee2985dfe17-kube-api-access-nnfst\") pod \"dnsmasq-dns-78dd6ddcc-mmhkq\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.681434 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dw74h\" (UniqueName: \"kubernetes.io/projected/26abd42f-9a39-41aa-a729-03fa7b62d72e-kube-api-access-dw74h\") pod \"dnsmasq-dns-675f4bcbfc-hf2rs\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.752908 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:12 crc kubenswrapper[4799]: I0127 08:06:12.828288 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.069294 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hf2rs"] Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.097580 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjbx9"] Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.098819 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.110725 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjbx9"] Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.168201 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmhbw\" (UniqueName: \"kubernetes.io/projected/4a2c8316-558f-409a-a144-1fef0a1b2a46-kube-api-access-dmhbw\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.168318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.168343 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-config\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.219593 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hf2rs"] Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.269264 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 
crc kubenswrapper[4799]: I0127 08:06:13.269344 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-config\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.269412 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmhbw\" (UniqueName: \"kubernetes.io/projected/4a2c8316-558f-409a-a144-1fef0a1b2a46-kube-api-access-dmhbw\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.270202 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.270207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-config\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.290440 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmhbw\" (UniqueName: \"kubernetes.io/projected/4a2c8316-558f-409a-a144-1fef0a1b2a46-kube-api-access-dmhbw\") pod \"dnsmasq-dns-666b6646f7-hjbx9\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.352933 4799 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mmhkq"] Jan 27 08:06:13 crc kubenswrapper[4799]: W0127 08:06:13.360023 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86eb2480_c0ab_4904_94ba_eee2985dfe17.slice/crio-6c1623fa3c327095197b1caa70b5ecb2425787c876124ce66e13c145a8509bf9 WatchSource:0}: Error finding container 6c1623fa3c327095197b1caa70b5ecb2425787c876124ce66e13c145a8509bf9: Status 404 returned error can't find the container with id 6c1623fa3c327095197b1caa70b5ecb2425787c876124ce66e13c145a8509bf9 Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.421249 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.823479 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" event={"ID":"86eb2480-c0ab-4904-94ba-eee2985dfe17","Type":"ContainerStarted","Data":"6c1623fa3c327095197b1caa70b5ecb2425787c876124ce66e13c145a8509bf9"} Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.832531 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" event={"ID":"26abd42f-9a39-41aa-a729-03fa7b62d72e","Type":"ContainerStarted","Data":"6ec26aaa904589d2cc1964b5f56f49880a01db6fd459e7ee2c01c81cbfe27384"} Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.832885 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjbx9"] Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.861201 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mmhkq"] Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.923271 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bd7c7"] Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.924394 4799 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.981651 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-config\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.981705 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:13 crc kubenswrapper[4799]: I0127 08:06:13.981740 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97vtq\" (UniqueName: \"kubernetes.io/projected/b69d2028-2c20-45c0-8cb3-3dc2e3003902-kube-api-access-97vtq\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.028619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bd7c7"] Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.083690 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-config\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.084488 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.084553 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97vtq\" (UniqueName: \"kubernetes.io/projected/b69d2028-2c20-45c0-8cb3-3dc2e3003902-kube-api-access-97vtq\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.085792 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-config\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.085790 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.116942 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97vtq\" (UniqueName: \"kubernetes.io/projected/b69d2028-2c20-45c0-8cb3-3dc2e3003902-kube-api-access-97vtq\") pod \"dnsmasq-dns-57d769cc4f-bd7c7\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.229473 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.258162 4799 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.265963 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.267187 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.267441 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.267733 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.267875 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.268095 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.268182 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-n7xjx" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.272731 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.281934 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.390994 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391085 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391137 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391233 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391263 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " 
pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391319 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391348 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4c2h\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-kube-api-access-z4c2h\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391417 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391498 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.391568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 
08:06:14.391604 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.495324 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.495726 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.495784 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.495806 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.496774 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497059 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497109 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497146 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4c2h\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-kube-api-access-z4c2h\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497205 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497233 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " 
pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497285 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497354 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497431 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.497461 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.500882 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.503165 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.503638 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.506998 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.511077 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.512571 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.513042 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " 
pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.527909 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4c2h\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-kube-api-access-z4c2h\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.556330 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.596030 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.818747 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bd7c7"] Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.852771 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 08:06:14 crc kubenswrapper[4799]: I0127 08:06:14.874022 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" event={"ID":"4a2c8316-558f-409a-a144-1fef0a1b2a46","Type":"ContainerStarted","Data":"f56a90402ecda50087f8119a26299d4d2a916e9a2b12bc6123e31dee8dd47591"} Jan 27 08:06:14 crc kubenswrapper[4799]: W0127 08:06:14.874271 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d822fe6_f547_4b8f_a6e4_c7256e1b2ace.slice/crio-c75dd90023ac23b7428b0c415868224ee6c738dcd4b11a7831e7889365354e98 WatchSource:0}: Error finding container c75dd90023ac23b7428b0c415868224ee6c738dcd4b11a7831e7889365354e98: Status 404 returned error can't find 
the container with id c75dd90023ac23b7428b0c415868224ee6c738dcd4b11a7831e7889365354e98 Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.121357 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.123189 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.129951 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.129984 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.130033 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.130030 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xq5rk" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.130070 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.130114 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.130857 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.136107 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215219 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215254 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215286 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215429 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrsz\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-kube-api-access-5wrsz\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215478 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215584 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215633 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215770 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215908 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.215957 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.317169 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.317235 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.317279 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.317328 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrsz\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-kube-api-access-5wrsz\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.317354 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.317387 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.317408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.321379 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.322547 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.322723 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 
08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.322850 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.322899 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.323104 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.323939 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.324805 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.325633 4799 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.326085 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.326467 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.337367 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.340813 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.343596 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrsz\" (UniqueName: 
\"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-kube-api-access-5wrsz\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.347849 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.372802 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.464479 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.894970 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace","Type":"ContainerStarted","Data":"c75dd90023ac23b7428b0c415868224ee6c738dcd4b11a7831e7889365354e98"} Jan 27 08:06:15 crc kubenswrapper[4799]: I0127 08:06:15.896727 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" event={"ID":"b69d2028-2c20-45c0-8cb3-3dc2e3003902","Type":"ContainerStarted","Data":"be7436f9528e5fa99d2a4e9147b41d00fae79a7356ee3830afbb42de22e6f1e3"} Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.080121 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.411071 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.415131 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.417022 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.418861 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.418861 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-m5h2p" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.420134 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.425432 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.441386 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550150 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550245 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550270 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cqmkv\" (UniqueName: \"kubernetes.io/projected/eff64e6c-4e67-435e-9f12-2d0e77530da3-kube-api-access-cqmkv\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550412 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-default\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550446 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550501 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-kolla-config\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.550521 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652236 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-kolla-config\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652253 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652329 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652355 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652370 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqmkv\" (UniqueName: \"kubernetes.io/projected/eff64e6c-4e67-435e-9f12-2d0e77530da3-kube-api-access-cqmkv\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652406 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-default\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.652430 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.656243 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-default\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.656543 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-kolla-config\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 
08:06:16.656604 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.657011 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.657816 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.668239 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.675396 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.675422 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqmkv\" (UniqueName: 
\"kubernetes.io/projected/eff64e6c-4e67-435e-9f12-2d0e77530da3-kube-api-access-cqmkv\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.689706 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.735844 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 08:06:16 crc kubenswrapper[4799]: I0127 08:06:16.911575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0","Type":"ContainerStarted","Data":"98ee0c6e03cf0b63151da3373ee9831ee36f5330872278b158b020ad2f402eee"} Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.438785 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 08:06:17 crc kubenswrapper[4799]: W0127 08:06:17.481175 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeff64e6c_4e67_435e_9f12_2d0e77530da3.slice/crio-464d2ad70eb9a8e4e82c08e06b5e82f558f4c2eb2ee4f1ea7f5f6feee954dca9 WatchSource:0}: Error finding container 464d2ad70eb9a8e4e82c08e06b5e82f558f4c2eb2ee4f1ea7f5f6feee954dca9: Status 404 returned error can't find the container with id 464d2ad70eb9a8e4e82c08e06b5e82f558f4c2eb2ee4f1ea7f5f6feee954dca9 Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.922072 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.926522 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.935165 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.935604 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5qgkv" Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.935841 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.936093 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.949248 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 08:06:17 crc kubenswrapper[4799]: I0127 08:06:17.971961 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eff64e6c-4e67-435e-9f12-2d0e77530da3","Type":"ContainerStarted","Data":"464d2ad70eb9a8e4e82c08e06b5e82f558f4c2eb2ee4f1ea7f5f6feee954dca9"} Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003079 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003136 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26e17670-568e-498f-be09-ffb1406c3152-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003202 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003233 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003274 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003362 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.003391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ssps\" (UniqueName: \"kubernetes.io/projected/26e17670-568e-498f-be09-ffb1406c3152-kube-api-access-8ssps\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.009594 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.012396 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.020200 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.025222 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ltpsb" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.025654 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.025877 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106025 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-kolla-config\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106086 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26e17670-568e-498f-be09-ffb1406c3152-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106278 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg4fc\" (UniqueName: \"kubernetes.io/projected/963110c4-038a-4208-b712-f66e885aff69-kube-api-access-vg4fc\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106328 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106355 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106392 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-memcached-tls-certs\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106421 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106480 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106514 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ssps\" (UniqueName: \"kubernetes.io/projected/26e17670-568e-498f-be09-ffb1406c3152-kube-api-access-8ssps\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106596 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-combined-ca-bundle\") pod 
\"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106620 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-config-data\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106681 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26e17670-568e-498f-be09-ffb1406c3152-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106898 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.106948 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.107334 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" 
Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.108254 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.115525 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.134217 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ssps\" (UniqueName: \"kubernetes.io/projected/26e17670-568e-498f-be09-ffb1406c3152-kube-api-access-8ssps\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.136380 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.145320 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.210666 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-combined-ca-bundle\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.210717 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-config-data\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.210771 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-kolla-config\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.210818 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg4fc\" (UniqueName: \"kubernetes.io/projected/963110c4-038a-4208-b712-f66e885aff69-kube-api-access-vg4fc\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.210850 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-memcached-tls-certs\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.213243 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-kolla-config\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " 
pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.213448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-config-data\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.226921 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-combined-ca-bundle\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.234497 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-memcached-tls-certs\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.237395 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg4fc\" (UniqueName: \"kubernetes.io/projected/963110c4-038a-4208-b712-f66e885aff69-kube-api-access-vg4fc\") pod \"memcached-0\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.288199 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.349396 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.662393 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.949459 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 08:06:18 crc kubenswrapper[4799]: I0127 08:06:18.987249 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26e17670-568e-498f-be09-ffb1406c3152","Type":"ContainerStarted","Data":"5db7784bd31ea60c5e7d19657594f084e52f4dccc06e5be9cb330076ec46e324"} Jan 27 08:06:19 crc kubenswrapper[4799]: I0127 08:06:19.811373 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:06:19 crc kubenswrapper[4799]: I0127 08:06:19.820996 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:06:19 crc kubenswrapper[4799]: I0127 08:06:19.824977 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-grgcr" Jan 27 08:06:19 crc kubenswrapper[4799]: I0127 08:06:19.865987 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:06:19 crc kubenswrapper[4799]: I0127 08:06:19.941382 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb996\" (UniqueName: \"kubernetes.io/projected/c20f20a7-a62c-4138-92dc-e34db63251fa-kube-api-access-hb996\") pod \"kube-state-metrics-0\" (UID: \"c20f20a7-a62c-4138-92dc-e34db63251fa\") " pod="openstack/kube-state-metrics-0" Jan 27 08:06:20 crc kubenswrapper[4799]: I0127 08:06:20.042606 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb996\" (UniqueName: 
\"kubernetes.io/projected/c20f20a7-a62c-4138-92dc-e34db63251fa-kube-api-access-hb996\") pod \"kube-state-metrics-0\" (UID: \"c20f20a7-a62c-4138-92dc-e34db63251fa\") " pod="openstack/kube-state-metrics-0" Jan 27 08:06:20 crc kubenswrapper[4799]: I0127 08:06:20.064949 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb996\" (UniqueName: \"kubernetes.io/projected/c20f20a7-a62c-4138-92dc-e34db63251fa-kube-api-access-hb996\") pod \"kube-state-metrics-0\" (UID: \"c20f20a7-a62c-4138-92dc-e34db63251fa\") " pod="openstack/kube-state-metrics-0" Jan 27 08:06:20 crc kubenswrapper[4799]: I0127 08:06:20.197856 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.573463 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.574589 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.576537 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.577144 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vb5fb" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.577329 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.577460 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.578204 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.591276 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.720716 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.720759 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.720811 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.720842 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tmcz\" (UniqueName: \"kubernetes.io/projected/e6b7da0a-2774-4bae-ba2f-3b943e027082-kube-api-access-7tmcz\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.720871 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.721007 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.721092 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.721156 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.730716 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.730758 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.822891 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.822941 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.822968 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.822993 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.823024 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.823048 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.823096 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.823128 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tmcz\" (UniqueName: \"kubernetes.io/projected/e6b7da0a-2774-4bae-ba2f-3b943e027082-kube-api-access-7tmcz\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.823433 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.823569 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.824047 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.824381 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.828386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.828821 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.837058 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.841242 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tmcz\" (UniqueName: \"kubernetes.io/projected/e6b7da0a-2774-4bae-ba2f-3b943e027082-kube-api-access-7tmcz\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.845340 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:23 crc kubenswrapper[4799]: I0127 08:06:23.905797 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.027807 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"963110c4-038a-4208-b712-f66e885aff69","Type":"ContainerStarted","Data":"b53f152c6711522f8a575f1c61b868522bc55151f6f63b08e2cad87bbfc69bdb"} Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.275116 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lx6nr"] Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.276133 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.278168 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ndww9" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.280745 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.286623 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zct2j"] Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.288122 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.289942 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.302125 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lx6nr"] Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.315101 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zct2j"] Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.333839 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-run\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.333924 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-combined-ca-bundle\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " 
pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.333958 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.333996 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b996cd-10af-493c-9972-bb6d9bedc711-scripts\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.334115 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-lib\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.334313 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-etc-ovs\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.334392 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btb6k\" (UniqueName: \"kubernetes.io/projected/c92846fc-e305-4af9-816a-4067b79d2403-kube-api-access-btb6k\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc 
kubenswrapper[4799]: I0127 08:06:24.334459 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c92846fc-e305-4af9-816a-4067b79d2403-scripts\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.334507 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4f2\" (UniqueName: \"kubernetes.io/projected/82b996cd-10af-493c-9972-bb6d9bedc711-kube-api-access-ct4f2\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.334532 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-ovn-controller-tls-certs\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.334564 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-log-ovn\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.334635 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-log\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 
08:06:24.334691 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run-ovn\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436648 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-run\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-combined-ca-bundle\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436753 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436789 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b996cd-10af-493c-9972-bb6d9bedc711-scripts\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436820 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-lib\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436855 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-etc-ovs\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436892 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btb6k\" (UniqueName: \"kubernetes.io/projected/c92846fc-e305-4af9-816a-4067b79d2403-kube-api-access-btb6k\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436919 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c92846fc-e305-4af9-816a-4067b79d2403-scripts\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436951 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4f2\" (UniqueName: \"kubernetes.io/projected/82b996cd-10af-493c-9972-bb6d9bedc711-kube-api-access-ct4f2\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436969 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-ovn-controller-tls-certs\") pod 
\"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.436986 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-log-ovn\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.437012 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-log\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.437035 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run-ovn\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.437161 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-run\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.437224 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run-ovn\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.439551 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c92846fc-e305-4af9-816a-4067b79d2403-scripts\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.443738 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-log-ovn\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.443876 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-log\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.444030 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.444199 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-lib\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.444233 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-ovn-controller-tls-certs\") pod 
\"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.444372 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-etc-ovs\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.445312 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b996cd-10af-493c-9972-bb6d9bedc711-scripts\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.451317 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-combined-ca-bundle\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.455879 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btb6k\" (UniqueName: \"kubernetes.io/projected/c92846fc-e305-4af9-816a-4067b79d2403-kube-api-access-btb6k\") pod \"ovn-controller-lx6nr\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.455863 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4f2\" (UniqueName: \"kubernetes.io/projected/82b996cd-10af-493c-9972-bb6d9bedc711-kube-api-access-ct4f2\") pod \"ovn-controller-ovs-zct2j\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:24 crc 
kubenswrapper[4799]: I0127 08:06:24.599690 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:24 crc kubenswrapper[4799]: I0127 08:06:24.608390 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.142483 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.144581 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.149323 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.151477 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.151842 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.159433 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.160085 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-cm8l6" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.195568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.195640 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.195695 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-config\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.195840 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.196147 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.196244 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/617cc655-aae2-4918-ba79-05e346cf9200-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.196399 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg5x\" 
(UniqueName: \"kubernetes.io/projected/617cc655-aae2-4918-ba79-05e346cf9200-kube-api-access-slg5x\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.196446 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.298592 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.298668 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/617cc655-aae2-4918-ba79-05e346cf9200-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.298715 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg5x\" (UniqueName: \"kubernetes.io/projected/617cc655-aae2-4918-ba79-05e346cf9200-kube-api-access-slg5x\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.298756 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.298818 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.298862 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.299382 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-config\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.299426 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.299771 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/617cc655-aae2-4918-ba79-05e346cf9200-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.300437 4799 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.301334 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.304987 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.307247 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-config\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.313837 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.313871 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.320216 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg5x\" (UniqueName: \"kubernetes.io/projected/617cc655-aae2-4918-ba79-05e346cf9200-kube-api-access-slg5x\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.327827 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:27 crc kubenswrapper[4799]: I0127 08:06:27.463200 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:42 crc kubenswrapper[4799]: E0127 08:06:42.676267 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 27 08:06:42 crc kubenswrapper[4799]: E0127 08:06:42.677046 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqmkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(eff64e6c-4e67-435e-9f12-2d0e77530da3): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:42 crc kubenswrapper[4799]: E0127 08:06:42.678753 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.165819 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.931827 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.932060 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ssps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(26e17670-568e-498f-be09-ffb1406c3152): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.933266 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="26e17670-568e-498f-be09-ffb1406c3152" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.956263 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.956529 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wrsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:43 crc 
kubenswrapper[4799]: E0127 08:06:43.957745 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.983796 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 08:06:43 crc kubenswrapper[4799]: E0127 08:06:43.983996 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4c2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(8d822fe6-f547-4b8f-a6e4-c7256e1b2ace): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:43 crc 
kubenswrapper[4799]: E0127 08:06:43.985200 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.171024 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.171071 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.171071 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="26e17670-568e-498f-be09-ffb1406c3152" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.850726 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.851821 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnfst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mmhkq_openstack(86eb2480-c0ab-4904-94ba-eee2985dfe17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.853030 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" podUID="86eb2480-c0ab-4904-94ba-eee2985dfe17" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.944531 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.945130 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw74h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hf2rs_openstack(26abd42f-9a39-41aa-a729-03fa7b62d72e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.946658 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" podUID="26abd42f-9a39-41aa-a729-03fa7b62d72e" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.948610 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.948732 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97vtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-bd7c7_openstack(b69d2028-2c20-45c0-8cb3-3dc2e3003902): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.950329 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" podUID="b69d2028-2c20-45c0-8cb3-3dc2e3003902" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.973733 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.973879 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmhbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-hjbx9_openstack(4a2c8316-558f-409a-a144-1fef0a1b2a46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:44 crc kubenswrapper[4799]: E0127 08:06:44.974973 4799 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" podUID="4a2c8316-558f-409a-a144-1fef0a1b2a46" Jan 27 08:06:45 crc kubenswrapper[4799]: E0127 08:06:45.181710 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" podUID="4a2c8316-558f-409a-a144-1fef0a1b2a46" Jan 27 08:06:45 crc kubenswrapper[4799]: E0127 08:06:45.186385 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" podUID="b69d2028-2c20-45c0-8cb3-3dc2e3003902" Jan 27 08:06:45 crc kubenswrapper[4799]: E0127 08:06:45.815074 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 27 08:06:45 crc kubenswrapper[4799]: E0127 08:06:45.815522 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nc4h548h666h7fh598h65h5f4h5f8h64dh54ch648hb5h5bfh696h54dh5dbh5ch59ch555h588h58dhd6hbch564h688h68h96h89h564hc9h68bh9bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vg4fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(963110c4-038a-4208-b712-f66e885aff69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:06:45 crc kubenswrapper[4799]: E0127 08:06:45.818249 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="963110c4-038a-4208-b712-f66e885aff69" Jan 27 08:06:45 crc kubenswrapper[4799]: I0127 08:06:45.928097 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:45 crc kubenswrapper[4799]: I0127 08:06:45.959714 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:45 crc kubenswrapper[4799]: I0127 08:06:45.968557 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw74h\" (UniqueName: \"kubernetes.io/projected/26abd42f-9a39-41aa-a729-03fa7b62d72e-kube-api-access-dw74h\") pod \"26abd42f-9a39-41aa-a729-03fa7b62d72e\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " Jan 27 08:06:45 crc kubenswrapper[4799]: I0127 08:06:45.968656 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26abd42f-9a39-41aa-a729-03fa7b62d72e-config\") pod \"26abd42f-9a39-41aa-a729-03fa7b62d72e\" (UID: \"26abd42f-9a39-41aa-a729-03fa7b62d72e\") " Jan 27 08:06:45 crc kubenswrapper[4799]: I0127 08:06:45.969576 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26abd42f-9a39-41aa-a729-03fa7b62d72e-config" (OuterVolumeSpecName: "config") pod "26abd42f-9a39-41aa-a729-03fa7b62d72e" (UID: "26abd42f-9a39-41aa-a729-03fa7b62d72e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:45 crc kubenswrapper[4799]: I0127 08:06:45.977502 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26abd42f-9a39-41aa-a729-03fa7b62d72e-kube-api-access-dw74h" (OuterVolumeSpecName: "kube-api-access-dw74h") pod "26abd42f-9a39-41aa-a729-03fa7b62d72e" (UID: "26abd42f-9a39-41aa-a729-03fa7b62d72e"). InnerVolumeSpecName "kube-api-access-dw74h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.069823 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnfst\" (UniqueName: \"kubernetes.io/projected/86eb2480-c0ab-4904-94ba-eee2985dfe17-kube-api-access-nnfst\") pod \"86eb2480-c0ab-4904-94ba-eee2985dfe17\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.069921 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-dns-svc\") pod \"86eb2480-c0ab-4904-94ba-eee2985dfe17\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.069979 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-config\") pod \"86eb2480-c0ab-4904-94ba-eee2985dfe17\" (UID: \"86eb2480-c0ab-4904-94ba-eee2985dfe17\") " Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.070790 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26abd42f-9a39-41aa-a729-03fa7b62d72e-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.070809 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw74h\" (UniqueName: \"kubernetes.io/projected/26abd42f-9a39-41aa-a729-03fa7b62d72e-kube-api-access-dw74h\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.071264 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-config" (OuterVolumeSpecName: "config") pod "86eb2480-c0ab-4904-94ba-eee2985dfe17" (UID: "86eb2480-c0ab-4904-94ba-eee2985dfe17"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.071647 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86eb2480-c0ab-4904-94ba-eee2985dfe17" (UID: "86eb2480-c0ab-4904-94ba-eee2985dfe17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.077006 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86eb2480-c0ab-4904-94ba-eee2985dfe17-kube-api-access-nnfst" (OuterVolumeSpecName: "kube-api-access-nnfst") pod "86eb2480-c0ab-4904-94ba-eee2985dfe17" (UID: "86eb2480-c0ab-4904-94ba-eee2985dfe17"). InnerVolumeSpecName "kube-api-access-nnfst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.119129 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.172494 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnfst\" (UniqueName: \"kubernetes.io/projected/86eb2480-c0ab-4904-94ba-eee2985dfe17-kube-api-access-nnfst\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.172526 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.172536 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb2480-c0ab-4904-94ba-eee2985dfe17-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.182604 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/kube-state-metrics-0" event={"ID":"c20f20a7-a62c-4138-92dc-e34db63251fa","Type":"ContainerStarted","Data":"dd076b51607d0964990c4d24374be42b87582fca08ad4c0c8f30cecdefcfcbba"} Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.183647 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" event={"ID":"26abd42f-9a39-41aa-a729-03fa7b62d72e","Type":"ContainerDied","Data":"6ec26aaa904589d2cc1964b5f56f49880a01db6fd459e7ee2c01c81cbfe27384"} Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.183711 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hf2rs" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.192095 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.192106 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mmhkq" event={"ID":"86eb2480-c0ab-4904-94ba-eee2985dfe17","Type":"ContainerDied","Data":"6c1623fa3c327095197b1caa70b5ecb2425787c876124ce66e13c145a8509bf9"} Jan 27 08:06:46 crc kubenswrapper[4799]: E0127 08:06:46.193423 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="963110c4-038a-4208-b712-f66e885aff69" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.275947 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hf2rs"] Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.294753 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hf2rs"] Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.312531 4799 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mmhkq"] Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.319158 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mmhkq"] Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.327291 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.448984 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lx6nr"] Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.486996 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26abd42f-9a39-41aa-a729-03fa7b62d72e" path="/var/lib/kubelet/pods/26abd42f-9a39-41aa-a729-03fa7b62d72e/volumes" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.487361 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86eb2480-c0ab-4904-94ba-eee2985dfe17" path="/var/lib/kubelet/pods/86eb2480-c0ab-4904-94ba-eee2985dfe17/volumes" Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.535271 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 08:06:46 crc kubenswrapper[4799]: W0127 08:06:46.539089 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod617cc655_aae2_4918_ba79_05e346cf9200.slice/crio-ab4ed8a7cf947fe16abe4581853510f2754cc1aa328e7f070e0d1a6ecd3f307c WatchSource:0}: Error finding container ab4ed8a7cf947fe16abe4581853510f2754cc1aa328e7f070e0d1a6ecd3f307c: Status 404 returned error can't find the container with id ab4ed8a7cf947fe16abe4581853510f2754cc1aa328e7f070e0d1a6ecd3f307c Jan 27 08:06:46 crc kubenswrapper[4799]: I0127 08:06:46.687086 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zct2j"] Jan 27 08:06:46 crc kubenswrapper[4799]: W0127 08:06:46.700861 4799 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82b996cd_10af_493c_9972_bb6d9bedc711.slice/crio-fdec8e1c21f0e8cb550396747c7e6f7c5caf17702f25535c340d1c434fd49346 WatchSource:0}: Error finding container fdec8e1c21f0e8cb550396747c7e6f7c5caf17702f25535c340d1c434fd49346: Status 404 returned error can't find the container with id fdec8e1c21f0e8cb550396747c7e6f7c5caf17702f25535c340d1c434fd49346 Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.199319 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lx6nr" event={"ID":"c92846fc-e305-4af9-816a-4067b79d2403","Type":"ContainerStarted","Data":"90f54639820210985b8db6a1ac08dca9fcb1bd24e3e1719337175bbe0a116f89"} Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.201083 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerStarted","Data":"fdec8e1c21f0e8cb550396747c7e6f7c5caf17702f25535c340d1c434fd49346"} Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.202872 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b7da0a-2774-4bae-ba2f-3b943e027082","Type":"ContainerStarted","Data":"02a2eae437335800ff988d22c88c6fd3208704b5e6630014b4fe86153ce6795e"} Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.203992 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"617cc655-aae2-4918-ba79-05e346cf9200","Type":"ContainerStarted","Data":"ab4ed8a7cf947fe16abe4581853510f2754cc1aa328e7f070e0d1a6ecd3f307c"} Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.432561 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dt4kd"] Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.446278 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dt4kd"] Jan 27 
08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.446409 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.448977 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.499413 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovs-rundir\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.499506 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.499812 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovn-rundir\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.500019 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkljb\" (UniqueName: \"kubernetes.io/projected/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-kube-api-access-rkljb\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" 
Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.500062 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-combined-ca-bundle\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.500118 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-config\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.608220 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-config\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.608318 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovs-rundir\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.608343 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc 
kubenswrapper[4799]: I0127 08:06:47.608369 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovn-rundir\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.608421 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkljb\" (UniqueName: \"kubernetes.io/projected/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-kube-api-access-rkljb\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.608438 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-combined-ca-bundle\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.617382 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovs-rundir\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.617751 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovn-rundir\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.617828 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-config\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.619589 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bd7c7"] Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.623670 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.633817 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-combined-ca-bundle\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.644217 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hb57w"] Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.645411 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.651561 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkljb\" (UniqueName: \"kubernetes.io/projected/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-kube-api-access-rkljb\") pod \"ovn-controller-metrics-dt4kd\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.651849 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.686145 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hb57w"] Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.709494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2k5p\" (UniqueName: \"kubernetes.io/projected/73af45c0-86a5-47bb-8239-ca973fece66c-kube-api-access-d2k5p\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.709568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.709596 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-config\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.709652 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.801673 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.817406 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2k5p\" (UniqueName: \"kubernetes.io/projected/73af45c0-86a5-47bb-8239-ca973fece66c-kube-api-access-d2k5p\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.817469 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.817499 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-config\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.817537 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.818404 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.818570 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.819863 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-config\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.839365 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjbx9"] Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.854447 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7tq2z"] Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.855701 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.870606 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.872385 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2k5p\" (UniqueName: \"kubernetes.io/projected/73af45c0-86a5-47bb-8239-ca973fece66c-kube-api-access-d2k5p\") pod \"dnsmasq-dns-7fd796d7df-hb57w\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.884684 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7tq2z"] Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.919144 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.919263 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.919291 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.919331 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-config\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:47 crc kubenswrapper[4799]: I0127 08:06:47.919362 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbcq\" (UniqueName: \"kubernetes.io/projected/a14543e4-52bb-497f-bec7-d986ec4545e5-kube-api-access-mnbcq\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.011140 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.021095 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.021225 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.021252 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.021269 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-config\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.021365 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnbcq\" (UniqueName: \"kubernetes.io/projected/a14543e4-52bb-497f-bec7-d986ec4545e5-kube-api-access-mnbcq\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.022382 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.022479 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.022522 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-config\") pod 
\"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.022757 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.039840 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnbcq\" (UniqueName: \"kubernetes.io/projected/a14543e4-52bb-497f-bec7-d986ec4545e5-kube-api-access-mnbcq\") pod \"dnsmasq-dns-86db49b7ff-7tq2z\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.128026 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.212576 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.214271 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" event={"ID":"b69d2028-2c20-45c0-8cb3-3dc2e3003902","Type":"ContainerDied","Data":"be7436f9528e5fa99d2a4e9147b41d00fae79a7356ee3830afbb42de22e6f1e3"} Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.214386 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bd7c7" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.232182 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97vtq\" (UniqueName: \"kubernetes.io/projected/b69d2028-2c20-45c0-8cb3-3dc2e3003902-kube-api-access-97vtq\") pod \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.232349 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-config\") pod \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.232490 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-dns-svc\") pod \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\" (UID: \"b69d2028-2c20-45c0-8cb3-3dc2e3003902\") " Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.232994 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-config" (OuterVolumeSpecName: "config") pod "b69d2028-2c20-45c0-8cb3-3dc2e3003902" (UID: "b69d2028-2c20-45c0-8cb3-3dc2e3003902"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.233224 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b69d2028-2c20-45c0-8cb3-3dc2e3003902" (UID: "b69d2028-2c20-45c0-8cb3-3dc2e3003902"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.247336 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b69d2028-2c20-45c0-8cb3-3dc2e3003902-kube-api-access-97vtq" (OuterVolumeSpecName: "kube-api-access-97vtq") pod "b69d2028-2c20-45c0-8cb3-3dc2e3003902" (UID: "b69d2028-2c20-45c0-8cb3-3dc2e3003902"). InnerVolumeSpecName "kube-api-access-97vtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.303527 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.334188 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97vtq\" (UniqueName: \"kubernetes.io/projected/b69d2028-2c20-45c0-8cb3-3dc2e3003902-kube-api-access-97vtq\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.334220 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.334229 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b69d2028-2c20-45c0-8cb3-3dc2e3003902-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.435333 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmhbw\" (UniqueName: \"kubernetes.io/projected/4a2c8316-558f-409a-a144-1fef0a1b2a46-kube-api-access-dmhbw\") pod \"4a2c8316-558f-409a-a144-1fef0a1b2a46\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.435830 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-dns-svc\") pod \"4a2c8316-558f-409a-a144-1fef0a1b2a46\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.435886 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-config\") pod \"4a2c8316-558f-409a-a144-1fef0a1b2a46\" (UID: \"4a2c8316-558f-409a-a144-1fef0a1b2a46\") " Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.436290 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a2c8316-558f-409a-a144-1fef0a1b2a46" (UID: "4a2c8316-558f-409a-a144-1fef0a1b2a46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.436821 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-config" (OuterVolumeSpecName: "config") pod "4a2c8316-558f-409a-a144-1fef0a1b2a46" (UID: "4a2c8316-558f-409a-a144-1fef0a1b2a46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.444171 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2c8316-558f-409a-a144-1fef0a1b2a46-kube-api-access-dmhbw" (OuterVolumeSpecName: "kube-api-access-dmhbw") pod "4a2c8316-558f-409a-a144-1fef0a1b2a46" (UID: "4a2c8316-558f-409a-a144-1fef0a1b2a46"). InnerVolumeSpecName "kube-api-access-dmhbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.539504 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.539539 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2c8316-558f-409a-a144-1fef0a1b2a46-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.539552 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmhbw\" (UniqueName: \"kubernetes.io/projected/4a2c8316-558f-409a-a144-1fef0a1b2a46-kube-api-access-dmhbw\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.587579 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bd7c7"] Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.601844 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bd7c7"] Jan 27 08:06:48 crc kubenswrapper[4799]: I0127 08:06:48.826940 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dt4kd"] Jan 27 08:06:49 crc kubenswrapper[4799]: W0127 08:06:49.043633 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a68e3d_78f4_4a7a_9915_0801f0ffeed6.slice/crio-423e0ff5622dedf1b5358e30c57f3108bb225da07c9d45dea4a0517cd808c0a7 WatchSource:0}: Error finding container 423e0ff5622dedf1b5358e30c57f3108bb225da07c9d45dea4a0517cd808c0a7: Status 404 returned error can't find the container with id 423e0ff5622dedf1b5358e30c57f3108bb225da07c9d45dea4a0517cd808c0a7 Jan 27 08:06:49 crc kubenswrapper[4799]: I0127 08:06:49.226728 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" Jan 27 08:06:49 crc kubenswrapper[4799]: I0127 08:06:49.226745 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hjbx9" event={"ID":"4a2c8316-558f-409a-a144-1fef0a1b2a46","Type":"ContainerDied","Data":"f56a90402ecda50087f8119a26299d4d2a916e9a2b12bc6123e31dee8dd47591"} Jan 27 08:06:49 crc kubenswrapper[4799]: I0127 08:06:49.232562 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dt4kd" event={"ID":"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6","Type":"ContainerStarted","Data":"423e0ff5622dedf1b5358e30c57f3108bb225da07c9d45dea4a0517cd808c0a7"} Jan 27 08:06:49 crc kubenswrapper[4799]: I0127 08:06:49.305932 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjbx9"] Jan 27 08:06:49 crc kubenswrapper[4799]: I0127 08:06:49.316591 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hjbx9"] Jan 27 08:06:49 crc kubenswrapper[4799]: I0127 08:06:49.481233 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hb57w"] Jan 27 08:06:49 crc kubenswrapper[4799]: I0127 08:06:49.680235 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7tq2z"] Jan 27 08:06:50 crc kubenswrapper[4799]: I0127 08:06:50.246112 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c20f20a7-a62c-4138-92dc-e34db63251fa","Type":"ContainerStarted","Data":"090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30"} Jan 27 08:06:50 crc kubenswrapper[4799]: I0127 08:06:50.247409 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 08:06:50 crc kubenswrapper[4799]: I0127 08:06:50.265903 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=28.276552926 podStartE2EDuration="31.265883714s" podCreationTimestamp="2026-01-27 08:06:19 +0000 UTC" firstStartedPulling="2026-01-27 08:06:46.129716953 +0000 UTC m=+1272.440821018" lastFinishedPulling="2026-01-27 08:06:49.119047741 +0000 UTC m=+1275.430151806" observedRunningTime="2026-01-27 08:06:50.259849668 +0000 UTC m=+1276.570953743" watchObservedRunningTime="2026-01-27 08:06:50.265883714 +0000 UTC m=+1276.576987779" Jan 27 08:06:50 crc kubenswrapper[4799]: I0127 08:06:50.465852 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a2c8316-558f-409a-a144-1fef0a1b2a46" path="/var/lib/kubelet/pods/4a2c8316-558f-409a-a144-1fef0a1b2a46/volumes" Jan 27 08:06:50 crc kubenswrapper[4799]: I0127 08:06:50.466657 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b69d2028-2c20-45c0-8cb3-3dc2e3003902" path="/var/lib/kubelet/pods/b69d2028-2c20-45c0-8cb3-3dc2e3003902/volumes" Jan 27 08:06:50 crc kubenswrapper[4799]: W0127 08:06:50.797983 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda14543e4_52bb_497f_bec7_d986ec4545e5.slice/crio-b637b61c035d3a90f8430cf86cb4b6491f545134bcdedc38a338ad7c3fe1d4fa WatchSource:0}: Error finding container b637b61c035d3a90f8430cf86cb4b6491f545134bcdedc38a338ad7c3fe1d4fa: Status 404 returned error can't find the container with id b637b61c035d3a90f8430cf86cb4b6491f545134bcdedc38a338ad7c3fe1d4fa Jan 27 08:06:50 crc kubenswrapper[4799]: W0127 08:06:50.805089 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73af45c0_86a5_47bb_8239_ca973fece66c.slice/crio-49caef6c045d5c37916ef8a92777257b987af68d3480bfd4953c78b60cb3b76e WatchSource:0}: Error finding container 49caef6c045d5c37916ef8a92777257b987af68d3480bfd4953c78b60cb3b76e: Status 404 returned error can't find the container with 
id 49caef6c045d5c37916ef8a92777257b987af68d3480bfd4953c78b60cb3b76e Jan 27 08:06:51 crc kubenswrapper[4799]: I0127 08:06:51.254736 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" event={"ID":"a14543e4-52bb-497f-bec7-d986ec4545e5","Type":"ContainerStarted","Data":"b637b61c035d3a90f8430cf86cb4b6491f545134bcdedc38a338ad7c3fe1d4fa"} Jan 27 08:06:51 crc kubenswrapper[4799]: I0127 08:06:51.256053 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" event={"ID":"73af45c0-86a5-47bb-8239-ca973fece66c","Type":"ContainerStarted","Data":"49caef6c045d5c37916ef8a92777257b987af68d3480bfd4953c78b60cb3b76e"} Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.263087 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b7da0a-2774-4bae-ba2f-3b943e027082","Type":"ContainerStarted","Data":"ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508"} Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.265028 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"617cc655-aae2-4918-ba79-05e346cf9200","Type":"ContainerStarted","Data":"8b889d06f7ebe01917c15ab23bb6f82de1d3280d85886ca49cee0080e8046c73"} Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.266333 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lx6nr" event={"ID":"c92846fc-e305-4af9-816a-4067b79d2403","Type":"ContainerStarted","Data":"dba41f475ea813cae4d687243584f94982d501ee64961725b7a4d2f0b2272bd9"} Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.267261 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-lx6nr" Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.269053 4799 generic.go:334] "Generic (PLEG): container finished" podID="a14543e4-52bb-497f-bec7-d986ec4545e5" 
containerID="7c6c2f5fcdc9ba28ed0f22244ff75a7f412c6822d9d23eb91ca28beb2c719adf" exitCode=0 Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.269119 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" event={"ID":"a14543e4-52bb-497f-bec7-d986ec4545e5","Type":"ContainerDied","Data":"7c6c2f5fcdc9ba28ed0f22244ff75a7f412c6822d9d23eb91ca28beb2c719adf"} Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.270622 4799 generic.go:334] "Generic (PLEG): container finished" podID="73af45c0-86a5-47bb-8239-ca973fece66c" containerID="31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb" exitCode=0 Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.270832 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" event={"ID":"73af45c0-86a5-47bb-8239-ca973fece66c","Type":"ContainerDied","Data":"31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb"} Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.272744 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerStarted","Data":"9273d444c90e860b425f55a10394dc9bc4ec4a919c765d16a707028eb5a0d9d7"} Jan 27 08:06:52 crc kubenswrapper[4799]: I0127 08:06:52.293873 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-lx6nr" podStartSLOduration=22.967563277 podStartE2EDuration="28.293853424s" podCreationTimestamp="2026-01-27 08:06:24 +0000 UTC" firstStartedPulling="2026-01-27 08:06:46.467066006 +0000 UTC m=+1272.778170071" lastFinishedPulling="2026-01-27 08:06:51.793356153 +0000 UTC m=+1278.104460218" observedRunningTime="2026-01-27 08:06:52.289589866 +0000 UTC m=+1278.600693931" watchObservedRunningTime="2026-01-27 08:06:52.293853424 +0000 UTC m=+1278.604957489" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.282724 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" event={"ID":"73af45c0-86a5-47bb-8239-ca973fece66c","Type":"ContainerStarted","Data":"ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8"} Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.283363 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.284707 4799 generic.go:334] "Generic (PLEG): container finished" podID="82b996cd-10af-493c-9972-bb6d9bedc711" containerID="9273d444c90e860b425f55a10394dc9bc4ec4a919c765d16a707028eb5a0d9d7" exitCode=0 Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.284776 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerDied","Data":"9273d444c90e860b425f55a10394dc9bc4ec4a919c765d16a707028eb5a0d9d7"} Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.288036 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" event={"ID":"a14543e4-52bb-497f-bec7-d986ec4545e5","Type":"ContainerStarted","Data":"0feb6978ca2b40da37f4a7a77f58bded569c4b1e36443258f15bbaf2d5999ab9"} Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.288409 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.319419 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" podStartSLOduration=5.353777737 podStartE2EDuration="6.319397703s" podCreationTimestamp="2026-01-27 08:06:47 +0000 UTC" firstStartedPulling="2026-01-27 08:06:50.832445367 +0000 UTC m=+1277.143549442" lastFinishedPulling="2026-01-27 08:06:51.798065343 +0000 UTC m=+1278.109169408" observedRunningTime="2026-01-27 08:06:53.30515445 +0000 UTC m=+1279.616258515" 
watchObservedRunningTime="2026-01-27 08:06:53.319397703 +0000 UTC m=+1279.630501768" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.352596 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" podStartSLOduration=5.360911284 podStartE2EDuration="6.352573938s" podCreationTimestamp="2026-01-27 08:06:47 +0000 UTC" firstStartedPulling="2026-01-27 08:06:50.802910983 +0000 UTC m=+1277.114015048" lastFinishedPulling="2026-01-27 08:06:51.794573627 +0000 UTC m=+1278.105677702" observedRunningTime="2026-01-27 08:06:53.34650143 +0000 UTC m=+1279.657605505" watchObservedRunningTime="2026-01-27 08:06:53.352573938 +0000 UTC m=+1279.663678003" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.731360 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.731421 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.731469 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.732076 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13185769064c6ec2b432a1350a219c32fc8634c50a20acef33753a2a5d7615d7"} 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:06:53 crc kubenswrapper[4799]: I0127 08:06:53.732131 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://13185769064c6ec2b432a1350a219c32fc8634c50a20acef33753a2a5d7615d7" gracePeriod=600 Jan 27 08:06:54 crc kubenswrapper[4799]: I0127 08:06:54.305410 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="13185769064c6ec2b432a1350a219c32fc8634c50a20acef33753a2a5d7615d7" exitCode=0 Jan 27 08:06:54 crc kubenswrapper[4799]: I0127 08:06:54.305522 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"13185769064c6ec2b432a1350a219c32fc8634c50a20acef33753a2a5d7615d7"} Jan 27 08:06:54 crc kubenswrapper[4799]: I0127 08:06:54.305583 4799 scope.go:117] "RemoveContainer" containerID="213c4fc7aacd3827bd72695593e20f0c8c733e85cf917988ad9ae8811f1be289" Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.316557 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dt4kd" event={"ID":"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6","Type":"ContainerStarted","Data":"5509cf2b97d8f78121cc9bf786809bf94e29e6e3d4c6777462e39be2a813f69f"} Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.319553 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"45e8464efa823c0efb39a137ea02aa341a85fc57fd2ab60277b88ead10fb975d"} Jan 27 08:06:55 crc 
kubenswrapper[4799]: I0127 08:06:55.322521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerStarted","Data":"21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f"} Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.322585 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerStarted","Data":"cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e"} Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.322807 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.322844 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.324913 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b7da0a-2774-4bae-ba2f-3b943e027082","Type":"ContainerStarted","Data":"af4582fbc376280b8069bf7f7b55933070749f10ea9380861c1e04e1287e288f"} Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.326587 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"617cc655-aae2-4918-ba79-05e346cf9200","Type":"ContainerStarted","Data":"97b857c500f0dc120edc4b9f7299baa035a1a1571e8961c116690abe3c273321"} Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.341066 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dt4kd" podStartSLOduration=2.677334156 podStartE2EDuration="8.341045618s" podCreationTimestamp="2026-01-27 08:06:47 +0000 UTC" firstStartedPulling="2026-01-27 08:06:49.04898729 +0000 UTC m=+1275.360091355" lastFinishedPulling="2026-01-27 08:06:54.712698732 +0000 UTC 
m=+1281.023802817" observedRunningTime="2026-01-27 08:06:55.330906219 +0000 UTC m=+1281.642010304" watchObservedRunningTime="2026-01-27 08:06:55.341045618 +0000 UTC m=+1281.652149683" Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.437552 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.255399663 podStartE2EDuration="29.437534529s" podCreationTimestamp="2026-01-27 08:06:26 +0000 UTC" firstStartedPulling="2026-01-27 08:06:46.541734044 +0000 UTC m=+1272.852838109" lastFinishedPulling="2026-01-27 08:06:54.7238689 +0000 UTC m=+1281.034972975" observedRunningTime="2026-01-27 08:06:55.410875704 +0000 UTC m=+1281.721979769" watchObservedRunningTime="2026-01-27 08:06:55.437534529 +0000 UTC m=+1281.748638594" Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.486868 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zct2j" podStartSLOduration=26.452250133 podStartE2EDuration="31.486838218s" podCreationTimestamp="2026-01-27 08:06:24 +0000 UTC" firstStartedPulling="2026-01-27 08:06:46.703036802 +0000 UTC m=+1273.014140867" lastFinishedPulling="2026-01-27 08:06:51.737624887 +0000 UTC m=+1278.048728952" observedRunningTime="2026-01-27 08:06:55.486693664 +0000 UTC m=+1281.797797729" watchObservedRunningTime="2026-01-27 08:06:55.486838218 +0000 UTC m=+1281.797942283" Jan 27 08:06:55 crc kubenswrapper[4799]: I0127 08:06:55.521225 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.099757402 podStartE2EDuration="33.521202476s" podCreationTimestamp="2026-01-27 08:06:22 +0000 UTC" firstStartedPulling="2026-01-27 08:06:46.317022998 +0000 UTC m=+1272.628127063" lastFinishedPulling="2026-01-27 08:06:54.738468072 +0000 UTC m=+1281.049572137" observedRunningTime="2026-01-27 08:06:55.515692354 +0000 UTC m=+1281.826796419" watchObservedRunningTime="2026-01-27 
08:06:55.521202476 +0000 UTC m=+1281.832306531" Jan 27 08:06:56 crc kubenswrapper[4799]: I0127 08:06:56.333982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26e17670-568e-498f-be09-ffb1406c3152","Type":"ContainerStarted","Data":"85a174ff8c67bb38bd7c91ea1b4524dd854982c1de39341da5ed9b10d4340709"} Jan 27 08:06:56 crc kubenswrapper[4799]: I0127 08:06:56.907236 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:56 crc kubenswrapper[4799]: I0127 08:06:56.976364 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:57 crc kubenswrapper[4799]: I0127 08:06:57.342873 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eff64e6c-4e67-435e-9f12-2d0e77530da3","Type":"ContainerStarted","Data":"41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327"} Jan 27 08:06:57 crc kubenswrapper[4799]: I0127 08:06:57.343109 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:57 crc kubenswrapper[4799]: I0127 08:06:57.384161 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 08:06:57 crc kubenswrapper[4799]: I0127 08:06:57.463916 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:57 crc kubenswrapper[4799]: I0127 08:06:57.463998 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:57 crc kubenswrapper[4799]: I0127 08:06:57.507810 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.013487 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.214941 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.261137 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hb57w"] Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.350402 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace","Type":"ContainerStarted","Data":"eeb8b3edecbf9c4102ac408a97dae6338b573a4495066dcb7a4630df2561b314"} Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.350820 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" podUID="73af45c0-86a5-47bb-8239-ca973fece66c" containerName="dnsmasq-dns" containerID="cri-o://ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8" gracePeriod=10 Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.388805 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.628737 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.632552 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.635228 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.635668 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.635930 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fvzc2" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.638569 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.672992 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.682594 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.682661 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54237546-70b8-4475-bd97-53ea6047786b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.682749 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-scripts\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" 
Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.682774 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-config\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.682832 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.682856 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdx7\" (UniqueName: \"kubernetes.io/projected/54237546-70b8-4475-bd97-53ea6047786b-kube-api-access-jvdx7\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.682895 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.784885 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdx7\" (UniqueName: \"kubernetes.io/projected/54237546-70b8-4475-bd97-53ea6047786b-kube-api-access-jvdx7\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.784939 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.785007 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.785032 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54237546-70b8-4475-bd97-53ea6047786b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.785072 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-scripts\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.785093 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-config\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.785127 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.789090 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-scripts\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.789404 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54237546-70b8-4475-bd97-53ea6047786b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.789911 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-config\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.794714 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.805527 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.807144 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.851516 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdx7\" (UniqueName: \"kubernetes.io/projected/54237546-70b8-4475-bd97-53ea6047786b-kube-api-access-jvdx7\") pod \"ovn-northd-0\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " pod="openstack/ovn-northd-0" Jan 27 08:06:58 crc kubenswrapper[4799]: I0127 08:06:58.959383 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.044571 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.192811 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2k5p\" (UniqueName: \"kubernetes.io/projected/73af45c0-86a5-47bb-8239-ca973fece66c-kube-api-access-d2k5p\") pod \"73af45c0-86a5-47bb-8239-ca973fece66c\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.192895 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-ovsdbserver-nb\") pod \"73af45c0-86a5-47bb-8239-ca973fece66c\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.193016 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-config\") pod \"73af45c0-86a5-47bb-8239-ca973fece66c\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " Jan 27 08:06:59 crc 
kubenswrapper[4799]: I0127 08:06:59.193044 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-dns-svc\") pod \"73af45c0-86a5-47bb-8239-ca973fece66c\" (UID: \"73af45c0-86a5-47bb-8239-ca973fece66c\") " Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.198090 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73af45c0-86a5-47bb-8239-ca973fece66c-kube-api-access-d2k5p" (OuterVolumeSpecName: "kube-api-access-d2k5p") pod "73af45c0-86a5-47bb-8239-ca973fece66c" (UID: "73af45c0-86a5-47bb-8239-ca973fece66c"). InnerVolumeSpecName "kube-api-access-d2k5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.229627 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-config" (OuterVolumeSpecName: "config") pod "73af45c0-86a5-47bb-8239-ca973fece66c" (UID: "73af45c0-86a5-47bb-8239-ca973fece66c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.232959 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73af45c0-86a5-47bb-8239-ca973fece66c" (UID: "73af45c0-86a5-47bb-8239-ca973fece66c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.239009 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73af45c0-86a5-47bb-8239-ca973fece66c" (UID: "73af45c0-86a5-47bb-8239-ca973fece66c"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.294814 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.294848 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.294857 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2k5p\" (UniqueName: \"kubernetes.io/projected/73af45c0-86a5-47bb-8239-ca973fece66c-kube-api-access-d2k5p\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.294868 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73af45c0-86a5-47bb-8239-ca973fece66c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.359656 4799 generic.go:334] "Generic (PLEG): container finished" podID="73af45c0-86a5-47bb-8239-ca973fece66c" containerID="ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8" exitCode=0 Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.359828 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" event={"ID":"73af45c0-86a5-47bb-8239-ca973fece66c","Type":"ContainerDied","Data":"ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8"} Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.360564 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" 
event={"ID":"73af45c0-86a5-47bb-8239-ca973fece66c","Type":"ContainerDied","Data":"49caef6c045d5c37916ef8a92777257b987af68d3480bfd4953c78b60cb3b76e"} Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.359918 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hb57w" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.360625 4799 scope.go:117] "RemoveContainer" containerID="ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.394603 4799 scope.go:117] "RemoveContainer" containerID="31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.397404 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hb57w"] Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.405845 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hb57w"] Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.428451 4799 scope.go:117] "RemoveContainer" containerID="ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8" Jan 27 08:06:59 crc kubenswrapper[4799]: E0127 08:06:59.431092 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8\": container with ID starting with ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8 not found: ID does not exist" containerID="ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.431122 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8"} err="failed to get container status 
\"ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8\": rpc error: code = NotFound desc = could not find container \"ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8\": container with ID starting with ff83e4eb9332bf08ac928d88e94c10462103fafe74cadaeaf49629034816cca8 not found: ID does not exist" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.431142 4799 scope.go:117] "RemoveContainer" containerID="31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb" Jan 27 08:06:59 crc kubenswrapper[4799]: E0127 08:06:59.431576 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb\": container with ID starting with 31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb not found: ID does not exist" containerID="31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.431597 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb"} err="failed to get container status \"31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb\": rpc error: code = NotFound desc = could not find container \"31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb\": container with ID starting with 31bc6f62bfa11fe0898da7e849febd2cbf021672bff14d0060c3fb5162e47bbb not found: ID does not exist" Jan 27 08:06:59 crc kubenswrapper[4799]: I0127 08:06:59.448130 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.207860 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.373761 4799 generic.go:334] "Generic (PLEG): container 
finished" podID="26e17670-568e-498f-be09-ffb1406c3152" containerID="85a174ff8c67bb38bd7c91ea1b4524dd854982c1de39341da5ed9b10d4340709" exitCode=0 Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.373811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26e17670-568e-498f-be09-ffb1406c3152","Type":"ContainerDied","Data":"85a174ff8c67bb38bd7c91ea1b4524dd854982c1de39341da5ed9b10d4340709"} Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.378139 4799 generic.go:334] "Generic (PLEG): container finished" podID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerID="41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327" exitCode=0 Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.378212 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eff64e6c-4e67-435e-9f12-2d0e77530da3","Type":"ContainerDied","Data":"41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327"} Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.386262 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"963110c4-038a-4208-b712-f66e885aff69","Type":"ContainerStarted","Data":"d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a"} Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.386577 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.389543 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"54237546-70b8-4475-bd97-53ea6047786b","Type":"ContainerStarted","Data":"8bc1e658ce322d9a110b37012f6d585bc2b4c99706cdbf248e49fd1af4932bcf"} Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.433074 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=7.473484104 podStartE2EDuration="43.433048326s" 
podCreationTimestamp="2026-01-27 08:06:17 +0000 UTC" firstStartedPulling="2026-01-27 08:06:23.897598583 +0000 UTC m=+1250.208702648" lastFinishedPulling="2026-01-27 08:06:59.857162805 +0000 UTC m=+1286.168266870" observedRunningTime="2026-01-27 08:07:00.426854624 +0000 UTC m=+1286.737958689" watchObservedRunningTime="2026-01-27 08:07:00.433048326 +0000 UTC m=+1286.744152391" Jan 27 08:07:00 crc kubenswrapper[4799]: I0127 08:07:00.467279 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73af45c0-86a5-47bb-8239-ca973fece66c" path="/var/lib/kubelet/pods/73af45c0-86a5-47bb-8239-ca973fece66c/volumes" Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.406619 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"54237546-70b8-4475-bd97-53ea6047786b","Type":"ContainerStarted","Data":"960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d"} Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.406926 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.406937 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"54237546-70b8-4475-bd97-53ea6047786b","Type":"ContainerStarted","Data":"fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540"} Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.410668 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26e17670-568e-498f-be09-ffb1406c3152","Type":"ContainerStarted","Data":"4bd65b5bc7d74ca250c680832d09104b02e9463eba911724fc54dcc3b8686b82"} Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.414440 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"eff64e6c-4e67-435e-9f12-2d0e77530da3","Type":"ContainerStarted","Data":"bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3"} Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.418189 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0","Type":"ContainerStarted","Data":"f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8"} Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.427823 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.35997606 podStartE2EDuration="3.427803775s" podCreationTimestamp="2026-01-27 08:06:58 +0000 UTC" firstStartedPulling="2026-01-27 08:06:59.46057983 +0000 UTC m=+1285.771683945" lastFinishedPulling="2026-01-27 08:07:00.528407595 +0000 UTC m=+1286.839511660" observedRunningTime="2026-01-27 08:07:01.424938526 +0000 UTC m=+1287.736042601" watchObservedRunningTime="2026-01-27 08:07:01.427803775 +0000 UTC m=+1287.738907850" Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.453728 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371990.401062 podStartE2EDuration="46.45371383s" podCreationTimestamp="2026-01-27 08:06:15 +0000 UTC" firstStartedPulling="2026-01-27 08:06:17.48422574 +0000 UTC m=+1243.795329805" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:01.449664528 +0000 UTC m=+1287.760768583" watchObservedRunningTime="2026-01-27 08:07:01.45371383 +0000 UTC m=+1287.764817885" Jan 27 08:07:01 crc kubenswrapper[4799]: I0127 08:07:01.495264 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.314622222 podStartE2EDuration="45.495240934s" podCreationTimestamp="2026-01-27 08:06:16 +0000 UTC" firstStartedPulling="2026-01-27 
08:06:18.69876966 +0000 UTC m=+1245.009873715" lastFinishedPulling="2026-01-27 08:06:55.879388362 +0000 UTC m=+1282.190492427" observedRunningTime="2026-01-27 08:07:01.492404576 +0000 UTC m=+1287.803508661" watchObservedRunningTime="2026-01-27 08:07:01.495240934 +0000 UTC m=+1287.806344989" Jan 27 08:07:06 crc kubenswrapper[4799]: I0127 08:07:06.737066 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 08:07:06 crc kubenswrapper[4799]: I0127 08:07:06.737689 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.619618 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.704659 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.916034 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0981-account-create-update-2s2dd"] Jan 27 08:07:07 crc kubenswrapper[4799]: E0127 08:07:07.916393 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73af45c0-86a5-47bb-8239-ca973fece66c" containerName="init" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.916406 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="73af45c0-86a5-47bb-8239-ca973fece66c" containerName="init" Jan 27 08:07:07 crc kubenswrapper[4799]: E0127 08:07:07.916425 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73af45c0-86a5-47bb-8239-ca973fece66c" containerName="dnsmasq-dns" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.916432 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="73af45c0-86a5-47bb-8239-ca973fece66c" containerName="dnsmasq-dns" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.916589 4799 
memory_manager.go:354] "RemoveStaleState removing state" podUID="73af45c0-86a5-47bb-8239-ca973fece66c" containerName="dnsmasq-dns" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.917099 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.919090 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.931266 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0981-account-create-update-2s2dd"] Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.968014 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bf84n"] Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.969380 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.975438 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bf84n"] Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.992638 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49c238fc-db9c-4928-95a5-ba3a81f716f8-operator-scripts\") pod \"keystone-0981-account-create-update-2s2dd\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:07 crc kubenswrapper[4799]: I0127 08:07:07.992800 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vv4\" (UniqueName: \"kubernetes.io/projected/49c238fc-db9c-4928-95a5-ba3a81f716f8-kube-api-access-q7vv4\") pod \"keystone-0981-account-create-update-2s2dd\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " 
pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.094642 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-operator-scripts\") pod \"keystone-db-create-bf84n\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.094687 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vv4\" (UniqueName: \"kubernetes.io/projected/49c238fc-db9c-4928-95a5-ba3a81f716f8-kube-api-access-q7vv4\") pod \"keystone-0981-account-create-update-2s2dd\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.094761 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49c238fc-db9c-4928-95a5-ba3a81f716f8-operator-scripts\") pod \"keystone-0981-account-create-update-2s2dd\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.094785 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smqb\" (UniqueName: \"kubernetes.io/projected/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-kube-api-access-5smqb\") pod \"keystone-db-create-bf84n\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.095627 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49c238fc-db9c-4928-95a5-ba3a81f716f8-operator-scripts\") pod 
\"keystone-0981-account-create-update-2s2dd\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.113386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vv4\" (UniqueName: \"kubernetes.io/projected/49c238fc-db9c-4928-95a5-ba3a81f716f8-kube-api-access-q7vv4\") pod \"keystone-0981-account-create-update-2s2dd\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.187736 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pjzqp"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.188805 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.195931 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smqb\" (UniqueName: \"kubernetes.io/projected/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-kube-api-access-5smqb\") pod \"keystone-db-create-bf84n\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.196083 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-operator-scripts\") pod \"keystone-db-create-bf84n\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.196605 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pjzqp"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.197018 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-operator-scripts\") pod \"keystone-db-create-bf84n\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.222794 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smqb\" (UniqueName: \"kubernetes.io/projected/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-kube-api-access-5smqb\") pod \"keystone-db-create-bf84n\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.236718 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.283639 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.288597 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.288644 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.298366 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de94f63b-88ce-4f40-acc5-d9f70195f265-operator-scripts\") pod \"placement-db-create-pjzqp\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.298493 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtzn\" (UniqueName: 
\"kubernetes.io/projected/de94f63b-88ce-4f40-acc5-d9f70195f265-kube-api-access-kwtzn\") pod \"placement-db-create-pjzqp\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.322272 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c37c-account-create-update-6x5kp"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.323391 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.327469 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.338690 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c37c-account-create-update-6x5kp"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.353729 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.401202 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwtzn\" (UniqueName: \"kubernetes.io/projected/de94f63b-88ce-4f40-acc5-d9f70195f265-kube-api-access-kwtzn\") pod \"placement-db-create-pjzqp\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.401705 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de94f63b-88ce-4f40-acc5-d9f70195f265-operator-scripts\") pod \"placement-db-create-pjzqp\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.401775 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgjj\" (UniqueName: \"kubernetes.io/projected/49f26f77-a15a-4c1a-a697-fd3823a47c5b-kube-api-access-sqgjj\") pod \"placement-c37c-account-create-update-6x5kp\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.401952 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f26f77-a15a-4c1a-a697-fd3823a47c5b-operator-scripts\") pod \"placement-c37c-account-create-update-6x5kp\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.405376 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de94f63b-88ce-4f40-acc5-d9f70195f265-operator-scripts\") pod \"placement-db-create-pjzqp\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.421510 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.434781 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwtzn\" (UniqueName: \"kubernetes.io/projected/de94f63b-88ce-4f40-acc5-d9f70195f265-kube-api-access-kwtzn\") pod \"placement-db-create-pjzqp\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.504459 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.504696 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqgjj\" (UniqueName: \"kubernetes.io/projected/49f26f77-a15a-4c1a-a697-fd3823a47c5b-kube-api-access-sqgjj\") pod \"placement-c37c-account-create-update-6x5kp\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.505022 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f26f77-a15a-4c1a-a697-fd3823a47c5b-operator-scripts\") pod \"placement-c37c-account-create-update-6x5kp\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.507669 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f26f77-a15a-4c1a-a697-fd3823a47c5b-operator-scripts\") pod \"placement-c37c-account-create-update-6x5kp\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.524838 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgjj\" (UniqueName: \"kubernetes.io/projected/49f26f77-a15a-4c1a-a697-fd3823a47c5b-kube-api-access-sqgjj\") pod \"placement-c37c-account-create-update-6x5kp\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.586266 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dgncp"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.590604 4799 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgncp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.608891 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dgncp"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.616069 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.674270 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.689521 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f310-account-create-update-nffw8"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.690920 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.693771 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.710377 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-operator-scripts\") pod \"glance-f310-account-create-update-nffw8\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.710510 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6crh\" (UniqueName: \"kubernetes.io/projected/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-kube-api-access-d6crh\") pod \"glance-f310-account-create-update-nffw8\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " 
pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.710549 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlqm\" (UniqueName: \"kubernetes.io/projected/74da5422-dcf4-48cb-a29a-7378082a827d-kube-api-access-6zlqm\") pod \"glance-db-create-dgncp\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " pod="openstack/glance-db-create-dgncp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.710601 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74da5422-dcf4-48cb-a29a-7378082a827d-operator-scripts\") pod \"glance-db-create-dgncp\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " pod="openstack/glance-db-create-dgncp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.730794 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f310-account-create-update-nffw8"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.749576 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0981-account-create-update-2s2dd"] Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.811521 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6crh\" (UniqueName: \"kubernetes.io/projected/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-kube-api-access-d6crh\") pod \"glance-f310-account-create-update-nffw8\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.811565 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74da5422-dcf4-48cb-a29a-7378082a827d-operator-scripts\") pod \"glance-db-create-dgncp\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " 
pod="openstack/glance-db-create-dgncp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.811584 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zlqm\" (UniqueName: \"kubernetes.io/projected/74da5422-dcf4-48cb-a29a-7378082a827d-kube-api-access-6zlqm\") pod \"glance-db-create-dgncp\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " pod="openstack/glance-db-create-dgncp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.811674 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-operator-scripts\") pod \"glance-f310-account-create-update-nffw8\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.812387 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-operator-scripts\") pod \"glance-f310-account-create-update-nffw8\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.813134 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74da5422-dcf4-48cb-a29a-7378082a827d-operator-scripts\") pod \"glance-db-create-dgncp\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " pod="openstack/glance-db-create-dgncp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.830033 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zlqm\" (UniqueName: \"kubernetes.io/projected/74da5422-dcf4-48cb-a29a-7378082a827d-kube-api-access-6zlqm\") pod \"glance-db-create-dgncp\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " 
pod="openstack/glance-db-create-dgncp" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.832825 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6crh\" (UniqueName: \"kubernetes.io/projected/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-kube-api-access-d6crh\") pod \"glance-f310-account-create-update-nffw8\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.844592 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bf84n"] Jan 27 08:07:08 crc kubenswrapper[4799]: W0127 08:07:08.861529 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6cd25bd_8dfb_4557_a0c8_06b3ae779192.slice/crio-850a2e93cce8141f2cc7b678394049dab8a4d64183d3d7dd3d55ee9f929805b3 WatchSource:0}: Error finding container 850a2e93cce8141f2cc7b678394049dab8a4d64183d3d7dd3d55ee9f929805b3: Status 404 returned error can't find the container with id 850a2e93cce8141f2cc7b678394049dab8a4d64183d3d7dd3d55ee9f929805b3 Jan 27 08:07:08 crc kubenswrapper[4799]: I0127 08:07:08.926512 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgncp" Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.027382 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.074629 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pjzqp"] Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.208371 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c37c-account-create-update-6x5kp"] Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.359594 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dgncp"] Jan 27 08:07:09 crc kubenswrapper[4799]: W0127 08:07:09.422341 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74da5422_dcf4_48cb_a29a_7378082a827d.slice/crio-86110c0bd4cb7838b4853e9b5bc8846a6e51f50639e2a99a4ab95db8144badfe WatchSource:0}: Error finding container 86110c0bd4cb7838b4853e9b5bc8846a6e51f50639e2a99a4ab95db8144badfe: Status 404 returned error can't find the container with id 86110c0bd4cb7838b4853e9b5bc8846a6e51f50639e2a99a4ab95db8144badfe Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.505357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pjzqp" event={"ID":"de94f63b-88ce-4f40-acc5-d9f70195f265","Type":"ContainerStarted","Data":"7bfaef694e22e5a40ed6a51a010aed6fc114cc3d82e200c369c016b57c978d3f"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.505403 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pjzqp" event={"ID":"de94f63b-88ce-4f40-acc5-d9f70195f265","Type":"ContainerStarted","Data":"dcb042e323f18c752fca5ee192d844f1728162e492f9bdc5b57e75b1e23dcd68"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.532584 4799 generic.go:334] "Generic (PLEG): container finished" podID="49c238fc-db9c-4928-95a5-ba3a81f716f8" containerID="8ea6ccce7f6c8746e9576d8da35e940f2fec2ce87b0123fcad72b2ce4a91d8ca" exitCode=0 
Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.532683 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0981-account-create-update-2s2dd" event={"ID":"49c238fc-db9c-4928-95a5-ba3a81f716f8","Type":"ContainerDied","Data":"8ea6ccce7f6c8746e9576d8da35e940f2fec2ce87b0123fcad72b2ce4a91d8ca"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.532714 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0981-account-create-update-2s2dd" event={"ID":"49c238fc-db9c-4928-95a5-ba3a81f716f8","Type":"ContainerStarted","Data":"c9128e286908e8a3056e66bda81323ffa64a25a54ca3b21ccbccb7dcc63d7e32"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.550367 4799 generic.go:334] "Generic (PLEG): container finished" podID="a6cd25bd-8dfb-4557-a0c8-06b3ae779192" containerID="8531b9fe91c1d7e57c8cf0e306faf43d14f749256aafc3cbbde126566f3c856a" exitCode=0 Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.550469 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bf84n" event={"ID":"a6cd25bd-8dfb-4557-a0c8-06b3ae779192","Type":"ContainerDied","Data":"8531b9fe91c1d7e57c8cf0e306faf43d14f749256aafc3cbbde126566f3c856a"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.550515 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bf84n" event={"ID":"a6cd25bd-8dfb-4557-a0c8-06b3ae779192","Type":"ContainerStarted","Data":"850a2e93cce8141f2cc7b678394049dab8a4d64183d3d7dd3d55ee9f929805b3"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.552256 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-pjzqp" podStartSLOduration=1.552245379 podStartE2EDuration="1.552245379s" podCreationTimestamp="2026-01-27 08:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:09.547801576 +0000 UTC 
m=+1295.858905641" watchObservedRunningTime="2026-01-27 08:07:09.552245379 +0000 UTC m=+1295.863349444" Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.565917 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgncp" event={"ID":"74da5422-dcf4-48cb-a29a-7378082a827d","Type":"ContainerStarted","Data":"86110c0bd4cb7838b4853e9b5bc8846a6e51f50639e2a99a4ab95db8144badfe"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.594056 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c37c-account-create-update-6x5kp" event={"ID":"49f26f77-a15a-4c1a-a697-fd3823a47c5b","Type":"ContainerStarted","Data":"c5c30ee649059f2df57e1c622c248f05c9721e9c2209bd4f77ee9b887d4f7b83"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.594124 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f310-account-create-update-nffw8"] Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.594139 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c37c-account-create-update-6x5kp" event={"ID":"49f26f77-a15a-4c1a-a697-fd3823a47c5b","Type":"ContainerStarted","Data":"8969580d89fbe5726bea8e526f1884d8dac8c91bdee35b0dc623d9b41935b97b"} Jan 27 08:07:09 crc kubenswrapper[4799]: I0127 08:07:09.638361 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c37c-account-create-update-6x5kp" podStartSLOduration=1.6382442400000001 podStartE2EDuration="1.63824424s" podCreationTimestamp="2026-01-27 08:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:09.630544478 +0000 UTC m=+1295.941648543" watchObservedRunningTime="2026-01-27 08:07:09.63824424 +0000 UTC m=+1295.949348305" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.130568 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-6bk7k"] Jan 
27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.132101 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.150409 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6bk7k"] Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.242742 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.242791 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8xf2\" (UniqueName: \"kubernetes.io/projected/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-kube-api-access-q8xf2\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.243066 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-config\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.243245 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc 
kubenswrapper[4799]: I0127 08:07:10.243313 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-dns-svc\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.344397 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-config\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.344478 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.344504 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-dns-svc\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.344522 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.344547 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q8xf2\" (UniqueName: \"kubernetes.io/projected/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-kube-api-access-q8xf2\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.345519 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.345567 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-config\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.345637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.345655 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-dns-svc\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.362875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8xf2\" (UniqueName: 
\"kubernetes.io/projected/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-kube-api-access-q8xf2\") pod \"dnsmasq-dns-698758b865-6bk7k\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") " pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.472869 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.603198 4799 generic.go:334] "Generic (PLEG): container finished" podID="de94f63b-88ce-4f40-acc5-d9f70195f265" containerID="7bfaef694e22e5a40ed6a51a010aed6fc114cc3d82e200c369c016b57c978d3f" exitCode=0 Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.603255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pjzqp" event={"ID":"de94f63b-88ce-4f40-acc5-d9f70195f265","Type":"ContainerDied","Data":"7bfaef694e22e5a40ed6a51a010aed6fc114cc3d82e200c369c016b57c978d3f"} Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.606241 4799 generic.go:334] "Generic (PLEG): container finished" podID="f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87" containerID="c66b308f3c279630c385e185b8439caef62eae7e151ec37ec9ff3e1f97bbef5c" exitCode=0 Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.606331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f310-account-create-update-nffw8" event={"ID":"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87","Type":"ContainerDied","Data":"c66b308f3c279630c385e185b8439caef62eae7e151ec37ec9ff3e1f97bbef5c"} Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.606360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f310-account-create-update-nffw8" event={"ID":"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87","Type":"ContainerStarted","Data":"14977de06d9f0461752856d70fc868535a2f3cf84515bb546d6eebf468b9f273"} Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.608620 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="74da5422-dcf4-48cb-a29a-7378082a827d" containerID="d4baa9a4ca248bb632c726ed19d0931976fe1a80ea8ddd800a2ef255bc28cc84" exitCode=0 Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.608687 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgncp" event={"ID":"74da5422-dcf4-48cb-a29a-7378082a827d","Type":"ContainerDied","Data":"d4baa9a4ca248bb632c726ed19d0931976fe1a80ea8ddd800a2ef255bc28cc84"} Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.611358 4799 generic.go:334] "Generic (PLEG): container finished" podID="49f26f77-a15a-4c1a-a697-fd3823a47c5b" containerID="c5c30ee649059f2df57e1c622c248f05c9721e9c2209bd4f77ee9b887d4f7b83" exitCode=0 Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.611485 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c37c-account-create-update-6x5kp" event={"ID":"49f26f77-a15a-4c1a-a697-fd3823a47c5b","Type":"ContainerDied","Data":"c5c30ee649059f2df57e1c622c248f05c9721e9c2209bd4f77ee9b887d4f7b83"} Jan 27 08:07:10 crc kubenswrapper[4799]: I0127 08:07:10.924629 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6bk7k"] Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.061183 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.071476 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.158366 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vv4\" (UniqueName: \"kubernetes.io/projected/49c238fc-db9c-4928-95a5-ba3a81f716f8-kube-api-access-q7vv4\") pod \"49c238fc-db9c-4928-95a5-ba3a81f716f8\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.158419 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49c238fc-db9c-4928-95a5-ba3a81f716f8-operator-scripts\") pod \"49c238fc-db9c-4928-95a5-ba3a81f716f8\" (UID: \"49c238fc-db9c-4928-95a5-ba3a81f716f8\") " Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.158453 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-operator-scripts\") pod \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.158480 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5smqb\" (UniqueName: \"kubernetes.io/projected/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-kube-api-access-5smqb\") pod \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\" (UID: \"a6cd25bd-8dfb-4557-a0c8-06b3ae779192\") " Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.159225 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c238fc-db9c-4928-95a5-ba3a81f716f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49c238fc-db9c-4928-95a5-ba3a81f716f8" (UID: "49c238fc-db9c-4928-95a5-ba3a81f716f8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.159434 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6cd25bd-8dfb-4557-a0c8-06b3ae779192" (UID: "a6cd25bd-8dfb-4557-a0c8-06b3ae779192"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.159915 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49c238fc-db9c-4928-95a5-ba3a81f716f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.159940 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.163010 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c238fc-db9c-4928-95a5-ba3a81f716f8-kube-api-access-q7vv4" (OuterVolumeSpecName: "kube-api-access-q7vv4") pod "49c238fc-db9c-4928-95a5-ba3a81f716f8" (UID: "49c238fc-db9c-4928-95a5-ba3a81f716f8"). InnerVolumeSpecName "kube-api-access-q7vv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.163610 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-kube-api-access-5smqb" (OuterVolumeSpecName: "kube-api-access-5smqb") pod "a6cd25bd-8dfb-4557-a0c8-06b3ae779192" (UID: "a6cd25bd-8dfb-4557-a0c8-06b3ae779192"). InnerVolumeSpecName "kube-api-access-5smqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.261885 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vv4\" (UniqueName: \"kubernetes.io/projected/49c238fc-db9c-4928-95a5-ba3a81f716f8-kube-api-access-q7vv4\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.262125 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5smqb\" (UniqueName: \"kubernetes.io/projected/a6cd25bd-8dfb-4557-a0c8-06b3ae779192-kube-api-access-5smqb\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.275847 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.276238 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c238fc-db9c-4928-95a5-ba3a81f716f8" containerName="mariadb-account-create-update" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.276260 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c238fc-db9c-4928-95a5-ba3a81f716f8" containerName="mariadb-account-create-update" Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.276281 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cd25bd-8dfb-4557-a0c8-06b3ae779192" containerName="mariadb-database-create" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.276290 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cd25bd-8dfb-4557-a0c8-06b3ae779192" containerName="mariadb-database-create" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.276559 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c238fc-db9c-4928-95a5-ba3a81f716f8" containerName="mariadb-account-create-update" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.276589 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cd25bd-8dfb-4557-a0c8-06b3ae779192" 
containerName="mariadb-database-create" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.283334 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.285523 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-k72xm" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.286115 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.286238 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.286421 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.313353 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.363179 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchm6\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-kube-api-access-nchm6\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.363245 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.363291 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-cache\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.363353 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.363377 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-lock\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.363395 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f707c5d5-a9c3-4fdb-8361-9604b6b70153-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.464850 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.464933 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-cache\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 
crc kubenswrapper[4799]: I0127 08:07:11.465001 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.465036 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-lock\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.465057 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f707c5d5-a9c3-4fdb-8361-9604b6b70153-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.465105 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchm6\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-kube-api-access-nchm6\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.465345 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.465518 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-cache\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.465562 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.465581 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.465639 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift podName:f707c5d5-a9c3-4fdb-8361-9604b6b70153 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:11.965620108 +0000 UTC m=+1298.276724193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift") pod "swift-storage-0" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153") : configmap "swift-ring-files" not found Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.465833 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-lock\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.473465 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f707c5d5-a9c3-4fdb-8361-9604b6b70153-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.481967 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nchm6\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-kube-api-access-nchm6\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.495499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.621163 4799 generic.go:334] "Generic (PLEG): container finished" podID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerID="7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0" exitCode=0 Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.621232 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6bk7k" event={"ID":"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d","Type":"ContainerDied","Data":"7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0"} Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.621262 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6bk7k" event={"ID":"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d","Type":"ContainerStarted","Data":"2d3f6dd28e3cb5b0b6f00d4e765235888acb96df8df08a975a39e6cb8f799365"} Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.623840 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0981-account-create-update-2s2dd" event={"ID":"49c238fc-db9c-4928-95a5-ba3a81f716f8","Type":"ContainerDied","Data":"c9128e286908e8a3056e66bda81323ffa64a25a54ca3b21ccbccb7dcc63d7e32"} Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.623861 4799 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c9128e286908e8a3056e66bda81323ffa64a25a54ca3b21ccbccb7dcc63d7e32" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.623908 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0981-account-create-update-2s2dd" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.632805 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bf84n" event={"ID":"a6cd25bd-8dfb-4557-a0c8-06b3ae779192","Type":"ContainerDied","Data":"850a2e93cce8141f2cc7b678394049dab8a4d64183d3d7dd3d55ee9f929805b3"} Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.632837 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bf84n" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.632856 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850a2e93cce8141f2cc7b678394049dab8a4d64183d3d7dd3d55ee9f929805b3" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.826437 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nnhs2"] Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.827790 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.829863 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.829954 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.830031 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.837345 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nnhs2"] Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973213 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-scripts\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973584 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-combined-ca-bundle\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973639 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973672 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-etc-swift\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973699 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-dispersionconf\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973729 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-ring-data-devices\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973754 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-swiftconf\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: I0127 08:07:11.973776 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x49h6\" (UniqueName: \"kubernetes.io/projected/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-kube-api-access-x49h6\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.973951 4799 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.973964 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 08:07:11 crc kubenswrapper[4799]: E0127 08:07:11.974002 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift podName:f707c5d5-a9c3-4fdb-8361-9604b6b70153 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:12.973987996 +0000 UTC m=+1299.285092061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift") pod "swift-storage-0" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153") : configmap "swift-ring-files" not found Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.080292 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-combined-ca-bundle\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.080524 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-etc-swift\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.080682 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-dispersionconf\") pod \"swift-ring-rebalance-nnhs2\" 
(UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.080753 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-ring-data-devices\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.080797 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-swiftconf\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.080848 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x49h6\" (UniqueName: \"kubernetes.io/projected/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-kube-api-access-x49h6\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.080929 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-scripts\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.083617 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-etc-swift\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 
08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.083951 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-ring-data-devices\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.089274 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-combined-ca-bundle\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.089775 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-scripts\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.091696 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-swiftconf\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.091751 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-dispersionconf\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.129985 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x49h6\" (UniqueName: \"kubernetes.io/projected/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-kube-api-access-x49h6\") pod \"swift-ring-rebalance-nnhs2\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.159511 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.309568 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.389090 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-operator-scripts\") pod \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.389546 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6crh\" (UniqueName: \"kubernetes.io/projected/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-kube-api-access-d6crh\") pod \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\" (UID: \"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.389991 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87" (UID: "f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.395349 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-kube-api-access-d6crh" (OuterVolumeSpecName: "kube-api-access-d6crh") pod "f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87" (UID: "f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87"). InnerVolumeSpecName "kube-api-access-d6crh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.442771 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.450468 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgncp" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.462195 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.492681 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.492698 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6crh\" (UniqueName: \"kubernetes.io/projected/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87-kube-api-access-d6crh\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.594197 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwtzn\" (UniqueName: \"kubernetes.io/projected/de94f63b-88ce-4f40-acc5-d9f70195f265-kube-api-access-kwtzn\") pod \"de94f63b-88ce-4f40-acc5-d9f70195f265\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.594245 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74da5422-dcf4-48cb-a29a-7378082a827d-operator-scripts\") pod \"74da5422-dcf4-48cb-a29a-7378082a827d\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.594271 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f26f77-a15a-4c1a-a697-fd3823a47c5b-operator-scripts\") pod \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.594320 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqgjj\" (UniqueName: \"kubernetes.io/projected/49f26f77-a15a-4c1a-a697-fd3823a47c5b-kube-api-access-sqgjj\") pod 
\"49f26f77-a15a-4c1a-a697-fd3823a47c5b\" (UID: \"49f26f77-a15a-4c1a-a697-fd3823a47c5b\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.594386 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de94f63b-88ce-4f40-acc5-d9f70195f265-operator-scripts\") pod \"de94f63b-88ce-4f40-acc5-d9f70195f265\" (UID: \"de94f63b-88ce-4f40-acc5-d9f70195f265\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.594407 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zlqm\" (UniqueName: \"kubernetes.io/projected/74da5422-dcf4-48cb-a29a-7378082a827d-kube-api-access-6zlqm\") pod \"74da5422-dcf4-48cb-a29a-7378082a827d\" (UID: \"74da5422-dcf4-48cb-a29a-7378082a827d\") " Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.595370 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de94f63b-88ce-4f40-acc5-d9f70195f265-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de94f63b-88ce-4f40-acc5-d9f70195f265" (UID: "de94f63b-88ce-4f40-acc5-d9f70195f265"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.596268 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f26f77-a15a-4c1a-a697-fd3823a47c5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49f26f77-a15a-4c1a-a697-fd3823a47c5b" (UID: "49f26f77-a15a-4c1a-a697-fd3823a47c5b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.596647 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74da5422-dcf4-48cb-a29a-7378082a827d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74da5422-dcf4-48cb-a29a-7378082a827d" (UID: "74da5422-dcf4-48cb-a29a-7378082a827d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.599170 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de94f63b-88ce-4f40-acc5-d9f70195f265-kube-api-access-kwtzn" (OuterVolumeSpecName: "kube-api-access-kwtzn") pod "de94f63b-88ce-4f40-acc5-d9f70195f265" (UID: "de94f63b-88ce-4f40-acc5-d9f70195f265"). InnerVolumeSpecName "kube-api-access-kwtzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.600530 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74da5422-dcf4-48cb-a29a-7378082a827d-kube-api-access-6zlqm" (OuterVolumeSpecName: "kube-api-access-6zlqm") pod "74da5422-dcf4-48cb-a29a-7378082a827d" (UID: "74da5422-dcf4-48cb-a29a-7378082a827d"). InnerVolumeSpecName "kube-api-access-6zlqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.600540 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f26f77-a15a-4c1a-a697-fd3823a47c5b-kube-api-access-sqgjj" (OuterVolumeSpecName: "kube-api-access-sqgjj") pod "49f26f77-a15a-4c1a-a697-fd3823a47c5b" (UID: "49f26f77-a15a-4c1a-a697-fd3823a47c5b"). InnerVolumeSpecName "kube-api-access-sqgjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.641448 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6bk7k" event={"ID":"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d","Type":"ContainerStarted","Data":"54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699"} Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.641588 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.645064 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dgncp" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.645063 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dgncp" event={"ID":"74da5422-dcf4-48cb-a29a-7378082a827d","Type":"ContainerDied","Data":"86110c0bd4cb7838b4853e9b5bc8846a6e51f50639e2a99a4ab95db8144badfe"} Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.645169 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86110c0bd4cb7838b4853e9b5bc8846a6e51f50639e2a99a4ab95db8144badfe" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.646816 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c37c-account-create-update-6x5kp" event={"ID":"49f26f77-a15a-4c1a-a697-fd3823a47c5b","Type":"ContainerDied","Data":"8969580d89fbe5726bea8e526f1884d8dac8c91bdee35b0dc623d9b41935b97b"} Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.646840 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8969580d89fbe5726bea8e526f1884d8dac8c91bdee35b0dc623d9b41935b97b" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.646881 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c37c-account-create-update-6x5kp" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.648041 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pjzqp" event={"ID":"de94f63b-88ce-4f40-acc5-d9f70195f265","Type":"ContainerDied","Data":"dcb042e323f18c752fca5ee192d844f1728162e492f9bdc5b57e75b1e23dcd68"} Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.648062 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcb042e323f18c752fca5ee192d844f1728162e492f9bdc5b57e75b1e23dcd68" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.648097 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pjzqp" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.654314 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f310-account-create-update-nffw8" event={"ID":"f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87","Type":"ContainerDied","Data":"14977de06d9f0461752856d70fc868535a2f3cf84515bb546d6eebf468b9f273"} Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.654349 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14977de06d9f0461752856d70fc868535a2f3cf84515bb546d6eebf468b9f273" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.654419 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f310-account-create-update-nffw8" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.674444 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-6bk7k" podStartSLOduration=2.6744198900000002 podStartE2EDuration="2.67441989s" podCreationTimestamp="2026-01-27 08:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:12.668731223 +0000 UTC m=+1298.979835288" watchObservedRunningTime="2026-01-27 08:07:12.67441989 +0000 UTC m=+1298.985523955" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.695822 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqgjj\" (UniqueName: \"kubernetes.io/projected/49f26f77-a15a-4c1a-a697-fd3823a47c5b-kube-api-access-sqgjj\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.695844 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de94f63b-88ce-4f40-acc5-d9f70195f265-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.695855 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zlqm\" (UniqueName: \"kubernetes.io/projected/74da5422-dcf4-48cb-a29a-7378082a827d-kube-api-access-6zlqm\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.695863 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74da5422-dcf4-48cb-a29a-7378082a827d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.695871 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwtzn\" (UniqueName: 
\"kubernetes.io/projected/de94f63b-88ce-4f40-acc5-d9f70195f265-kube-api-access-kwtzn\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.695879 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f26f77-a15a-4c1a-a697-fd3823a47c5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:12 crc kubenswrapper[4799]: W0127 08:07:12.696259 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3b3796e_e4b6_41e1_b1f5_dc7ade294816.slice/crio-ef6eda5c821716baafd33d304018eb5c9bdf941be1209ae41fb1f3c209cd9041 WatchSource:0}: Error finding container ef6eda5c821716baafd33d304018eb5c9bdf941be1209ae41fb1f3c209cd9041: Status 404 returned error can't find the container with id ef6eda5c821716baafd33d304018eb5c9bdf941be1209ae41fb1f3c209cd9041 Jan 27 08:07:12 crc kubenswrapper[4799]: I0127 08:07:12.704667 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nnhs2"] Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.018281 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:13 crc kubenswrapper[4799]: E0127 08:07:13.018653 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 08:07:13 crc kubenswrapper[4799]: E0127 08:07:13.018687 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 08:07:13 crc kubenswrapper[4799]: E0127 08:07:13.018766 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift podName:f707c5d5-a9c3-4fdb-8361-9604b6b70153 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:15.018739854 +0000 UTC m=+1301.329843919 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift") pod "swift-storage-0" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153") : configmap "swift-ring-files" not found Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.666291 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nnhs2" event={"ID":"b3b3796e-e4b6-41e1-b1f5-dc7ade294816","Type":"ContainerStarted","Data":"ef6eda5c821716baafd33d304018eb5c9bdf941be1209ae41fb1f3c209cd9041"} Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876021 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-22s2d"] Jan 27 08:07:13 crc kubenswrapper[4799]: E0127 08:07:13.876396 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f26f77-a15a-4c1a-a697-fd3823a47c5b" containerName="mariadb-account-create-update" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876413 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f26f77-a15a-4c1a-a697-fd3823a47c5b" containerName="mariadb-account-create-update" Jan 27 08:07:13 crc kubenswrapper[4799]: E0127 08:07:13.876430 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87" containerName="mariadb-account-create-update" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876437 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87" containerName="mariadb-account-create-update" Jan 27 08:07:13 crc kubenswrapper[4799]: E0127 08:07:13.876448 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74da5422-dcf4-48cb-a29a-7378082a827d" 
containerName="mariadb-database-create" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876456 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="74da5422-dcf4-48cb-a29a-7378082a827d" containerName="mariadb-database-create" Jan 27 08:07:13 crc kubenswrapper[4799]: E0127 08:07:13.876465 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de94f63b-88ce-4f40-acc5-d9f70195f265" containerName="mariadb-database-create" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876471 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="de94f63b-88ce-4f40-acc5-d9f70195f265" containerName="mariadb-database-create" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876625 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="de94f63b-88ce-4f40-acc5-d9f70195f265" containerName="mariadb-database-create" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876638 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="74da5422-dcf4-48cb-a29a-7378082a827d" containerName="mariadb-database-create" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876653 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87" containerName="mariadb-account-create-update" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.876663 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f26f77-a15a-4c1a-a697-fd3823a47c5b" containerName="mariadb-account-create-update" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.877105 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.879657 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.880416 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j7cm7" Jan 27 08:07:13 crc kubenswrapper[4799]: I0127 08:07:13.885862 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-22s2d"] Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.048021 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-config-data\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.048315 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqtpm\" (UniqueName: \"kubernetes.io/projected/1e7f06a1-752e-4e8b-9d59-991326981dda-kube-api-access-lqtpm\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.048376 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-db-sync-config-data\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.048511 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-combined-ca-bundle\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.150547 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqtpm\" (UniqueName: \"kubernetes.io/projected/1e7f06a1-752e-4e8b-9d59-991326981dda-kube-api-access-lqtpm\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.150592 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-db-sync-config-data\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.150620 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-combined-ca-bundle\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.150697 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-config-data\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.155952 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-config-data\") pod \"glance-db-sync-22s2d\" (UID: 
\"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.158017 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-combined-ca-bundle\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.159696 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-db-sync-config-data\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.169167 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqtpm\" (UniqueName: \"kubernetes.io/projected/1e7f06a1-752e-4e8b-9d59-991326981dda-kube-api-access-lqtpm\") pod \"glance-db-sync-22s2d\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:14 crc kubenswrapper[4799]: I0127 08:07:14.203212 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.065426 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:15 crc kubenswrapper[4799]: E0127 08:07:15.065614 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 08:07:15 crc kubenswrapper[4799]: E0127 08:07:15.065635 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 08:07:15 crc kubenswrapper[4799]: E0127 08:07:15.065686 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift podName:f707c5d5-a9c3-4fdb-8361-9604b6b70153 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:19.065670676 +0000 UTC m=+1305.376774741 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift") pod "swift-storage-0" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153") : configmap "swift-ring-files" not found Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.378405 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rkn86"] Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.379656 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.381976 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.386792 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rkn86"] Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.474806 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vj8b\" (UniqueName: \"kubernetes.io/projected/47c58d8c-7243-48d4-8359-c400be398f94-kube-api-access-9vj8b\") pod \"root-account-create-update-rkn86\" (UID: \"47c58d8c-7243-48d4-8359-c400be398f94\") " pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.474916 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c58d8c-7243-48d4-8359-c400be398f94-operator-scripts\") pod \"root-account-create-update-rkn86\" (UID: \"47c58d8c-7243-48d4-8359-c400be398f94\") " pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.576490 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c58d8c-7243-48d4-8359-c400be398f94-operator-scripts\") pod \"root-account-create-update-rkn86\" (UID: \"47c58d8c-7243-48d4-8359-c400be398f94\") " pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.576596 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vj8b\" (UniqueName: \"kubernetes.io/projected/47c58d8c-7243-48d4-8359-c400be398f94-kube-api-access-9vj8b\") pod \"root-account-create-update-rkn86\" (UID: 
\"47c58d8c-7243-48d4-8359-c400be398f94\") " pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.577693 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c58d8c-7243-48d4-8359-c400be398f94-operator-scripts\") pod \"root-account-create-update-rkn86\" (UID: \"47c58d8c-7243-48d4-8359-c400be398f94\") " pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.599707 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vj8b\" (UniqueName: \"kubernetes.io/projected/47c58d8c-7243-48d4-8359-c400be398f94-kube-api-access-9vj8b\") pod \"root-account-create-update-rkn86\" (UID: \"47c58d8c-7243-48d4-8359-c400be398f94\") " pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:15 crc kubenswrapper[4799]: I0127 08:07:15.699707 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.089760 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rkn86"] Jan 27 08:07:17 crc kubenswrapper[4799]: W0127 08:07:17.095142 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47c58d8c_7243_48d4_8359_c400be398f94.slice/crio-20650e12512e09ed7647f4cb68ffb286df127475f3405725bd7c5861a81a9cff WatchSource:0}: Error finding container 20650e12512e09ed7647f4cb68ffb286df127475f3405725bd7c5861a81a9cff: Status 404 returned error can't find the container with id 20650e12512e09ed7647f4cb68ffb286df127475f3405725bd7c5861a81a9cff Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.114823 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-22s2d"] Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.708219 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-22s2d" event={"ID":"1e7f06a1-752e-4e8b-9d59-991326981dda","Type":"ContainerStarted","Data":"1836536ff3ed05a60b70fa11155386632abe179de5950659064663568c7cd9f4"} Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.710450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nnhs2" event={"ID":"b3b3796e-e4b6-41e1-b1f5-dc7ade294816","Type":"ContainerStarted","Data":"42fb8ed3bcac153c476ac7c8729e2db6553912fbdf45263c9f43c02892a1d01d"} Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.714998 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rkn86" event={"ID":"47c58d8c-7243-48d4-8359-c400be398f94","Type":"ContainerDied","Data":"b97a28e671c633aa552074459bcd9ca6370d2cd4bbbec1651f30669159c9ecb0"} Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.714807 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="47c58d8c-7243-48d4-8359-c400be398f94" containerID="b97a28e671c633aa552074459bcd9ca6370d2cd4bbbec1651f30669159c9ecb0" exitCode=0 Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.715399 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rkn86" event={"ID":"47c58d8c-7243-48d4-8359-c400be398f94","Type":"ContainerStarted","Data":"20650e12512e09ed7647f4cb68ffb286df127475f3405725bd7c5861a81a9cff"} Jan 27 08:07:17 crc kubenswrapper[4799]: I0127 08:07:17.749705 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nnhs2" podStartSLOduration=2.869486202 podStartE2EDuration="6.749671355s" podCreationTimestamp="2026-01-27 08:07:11 +0000 UTC" firstStartedPulling="2026-01-27 08:07:12.70091012 +0000 UTC m=+1299.012014185" lastFinishedPulling="2026-01-27 08:07:16.581095263 +0000 UTC m=+1302.892199338" observedRunningTime="2026-01-27 08:07:17.738875788 +0000 UTC m=+1304.049979853" watchObservedRunningTime="2026-01-27 08:07:17.749671355 +0000 UTC m=+1304.060775460" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.018800 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.096558 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.137778 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:19 crc kubenswrapper[4799]: E0127 08:07:19.139491 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 08:07:19 crc kubenswrapper[4799]: E0127 08:07:19.139514 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 08:07:19 crc kubenswrapper[4799]: E0127 08:07:19.139552 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift podName:f707c5d5-a9c3-4fdb-8361-9604b6b70153 nodeName:}" failed. No retries permitted until 2026-01-27 08:07:27.13953756 +0000 UTC m=+1313.450641615 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift") pod "swift-storage-0" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153") : configmap "swift-ring-files" not found Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.239234 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c58d8c-7243-48d4-8359-c400be398f94-operator-scripts\") pod \"47c58d8c-7243-48d4-8359-c400be398f94\" (UID: \"47c58d8c-7243-48d4-8359-c400be398f94\") " Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.239334 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vj8b\" (UniqueName: \"kubernetes.io/projected/47c58d8c-7243-48d4-8359-c400be398f94-kube-api-access-9vj8b\") pod \"47c58d8c-7243-48d4-8359-c400be398f94\" (UID: \"47c58d8c-7243-48d4-8359-c400be398f94\") " Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.240559 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c58d8c-7243-48d4-8359-c400be398f94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47c58d8c-7243-48d4-8359-c400be398f94" (UID: "47c58d8c-7243-48d4-8359-c400be398f94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.246649 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c58d8c-7243-48d4-8359-c400be398f94-kube-api-access-9vj8b" (OuterVolumeSpecName: "kube-api-access-9vj8b") pod "47c58d8c-7243-48d4-8359-c400be398f94" (UID: "47c58d8c-7243-48d4-8359-c400be398f94"). InnerVolumeSpecName "kube-api-access-9vj8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.341364 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c58d8c-7243-48d4-8359-c400be398f94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.341701 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vj8b\" (UniqueName: \"kubernetes.io/projected/47c58d8c-7243-48d4-8359-c400be398f94-kube-api-access-9vj8b\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.732944 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rkn86" event={"ID":"47c58d8c-7243-48d4-8359-c400be398f94","Type":"ContainerDied","Data":"20650e12512e09ed7647f4cb68ffb286df127475f3405725bd7c5861a81a9cff"} Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.732996 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20650e12512e09ed7647f4cb68ffb286df127475f3405725bd7c5861a81a9cff" Jan 27 08:07:19 crc kubenswrapper[4799]: I0127 08:07:19.732971 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rkn86" Jan 27 08:07:20 crc kubenswrapper[4799]: I0127 08:07:20.475223 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-6bk7k" Jan 27 08:07:20 crc kubenswrapper[4799]: I0127 08:07:20.562894 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7tq2z"] Jan 27 08:07:20 crc kubenswrapper[4799]: I0127 08:07:20.563214 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" podUID="a14543e4-52bb-497f-bec7-d986ec4545e5" containerName="dnsmasq-dns" containerID="cri-o://0feb6978ca2b40da37f4a7a77f58bded569c4b1e36443258f15bbaf2d5999ab9" gracePeriod=10 Jan 27 08:07:20 crc kubenswrapper[4799]: I0127 08:07:20.742309 4799 generic.go:334] "Generic (PLEG): container finished" podID="a14543e4-52bb-497f-bec7-d986ec4545e5" containerID="0feb6978ca2b40da37f4a7a77f58bded569c4b1e36443258f15bbaf2d5999ab9" exitCode=0 Jan 27 08:07:20 crc kubenswrapper[4799]: I0127 08:07:20.742360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" event={"ID":"a14543e4-52bb-497f-bec7-d986ec4545e5","Type":"ContainerDied","Data":"0feb6978ca2b40da37f4a7a77f58bded569c4b1e36443258f15bbaf2d5999ab9"} Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.583323 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.703809 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-nb\") pod \"a14543e4-52bb-497f-bec7-d986ec4545e5\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.703907 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-config\") pod \"a14543e4-52bb-497f-bec7-d986ec4545e5\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.704006 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-sb\") pod \"a14543e4-52bb-497f-bec7-d986ec4545e5\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.704074 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnbcq\" (UniqueName: \"kubernetes.io/projected/a14543e4-52bb-497f-bec7-d986ec4545e5-kube-api-access-mnbcq\") pod \"a14543e4-52bb-497f-bec7-d986ec4545e5\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.704105 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-dns-svc\") pod \"a14543e4-52bb-497f-bec7-d986ec4545e5\" (UID: \"a14543e4-52bb-497f-bec7-d986ec4545e5\") " Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.709832 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a14543e4-52bb-497f-bec7-d986ec4545e5-kube-api-access-mnbcq" (OuterVolumeSpecName: "kube-api-access-mnbcq") pod "a14543e4-52bb-497f-bec7-d986ec4545e5" (UID: "a14543e4-52bb-497f-bec7-d986ec4545e5"). InnerVolumeSpecName "kube-api-access-mnbcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.746816 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a14543e4-52bb-497f-bec7-d986ec4545e5" (UID: "a14543e4-52bb-497f-bec7-d986ec4545e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.761794 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" event={"ID":"a14543e4-52bb-497f-bec7-d986ec4545e5","Type":"ContainerDied","Data":"b637b61c035d3a90f8430cf86cb4b6491f545134bcdedc38a338ad7c3fe1d4fa"} Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.761876 4799 scope.go:117] "RemoveContainer" containerID="0feb6978ca2b40da37f4a7a77f58bded569c4b1e36443258f15bbaf2d5999ab9" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.762030 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7tq2z" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.765359 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-config" (OuterVolumeSpecName: "config") pod "a14543e4-52bb-497f-bec7-d986ec4545e5" (UID: "a14543e4-52bb-497f-bec7-d986ec4545e5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.773678 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a14543e4-52bb-497f-bec7-d986ec4545e5" (UID: "a14543e4-52bb-497f-bec7-d986ec4545e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.778386 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a14543e4-52bb-497f-bec7-d986ec4545e5" (UID: "a14543e4-52bb-497f-bec7-d986ec4545e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.808804 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.808872 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.808884 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.808893 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnbcq\" (UniqueName: \"kubernetes.io/projected/a14543e4-52bb-497f-bec7-d986ec4545e5-kube-api-access-mnbcq\") on node \"crc\" 
DevicePath \"\"" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.808928 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a14543e4-52bb-497f-bec7-d986ec4545e5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.866419 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rkn86"] Jan 27 08:07:21 crc kubenswrapper[4799]: I0127 08:07:21.873086 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rkn86"] Jan 27 08:07:22 crc kubenswrapper[4799]: I0127 08:07:22.102583 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7tq2z"] Jan 27 08:07:22 crc kubenswrapper[4799]: I0127 08:07:22.110785 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7tq2z"] Jan 27 08:07:22 crc kubenswrapper[4799]: I0127 08:07:22.464828 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c58d8c-7243-48d4-8359-c400be398f94" path="/var/lib/kubelet/pods/47c58d8c-7243-48d4-8359-c400be398f94/volumes" Jan 27 08:07:22 crc kubenswrapper[4799]: I0127 08:07:22.465597 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14543e4-52bb-497f-bec7-d986ec4545e5" path="/var/lib/kubelet/pods/a14543e4-52bb-497f-bec7-d986ec4545e5/volumes" Jan 27 08:07:24 crc kubenswrapper[4799]: I0127 08:07:24.644221 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-lx6nr" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" probeResult="failure" output=< Jan 27 08:07:24 crc kubenswrapper[4799]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 08:07:24 crc kubenswrapper[4799]: > Jan 27 08:07:25 crc kubenswrapper[4799]: I0127 08:07:25.794467 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="b3b3796e-e4b6-41e1-b1f5-dc7ade294816" containerID="42fb8ed3bcac153c476ac7c8729e2db6553912fbdf45263c9f43c02892a1d01d" exitCode=0 Jan 27 08:07:25 crc kubenswrapper[4799]: I0127 08:07:25.794570 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nnhs2" event={"ID":"b3b3796e-e4b6-41e1-b1f5-dc7ade294816","Type":"ContainerDied","Data":"42fb8ed3bcac153c476ac7c8729e2db6553912fbdf45263c9f43c02892a1d01d"} Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.926671 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5wzgq"] Jan 27 08:07:26 crc kubenswrapper[4799]: E0127 08:07:26.927001 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c58d8c-7243-48d4-8359-c400be398f94" containerName="mariadb-account-create-update" Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.927013 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c58d8c-7243-48d4-8359-c400be398f94" containerName="mariadb-account-create-update" Jan 27 08:07:26 crc kubenswrapper[4799]: E0127 08:07:26.927027 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14543e4-52bb-497f-bec7-d986ec4545e5" containerName="init" Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.927033 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14543e4-52bb-497f-bec7-d986ec4545e5" containerName="init" Jan 27 08:07:26 crc kubenswrapper[4799]: E0127 08:07:26.927047 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14543e4-52bb-497f-bec7-d986ec4545e5" containerName="dnsmasq-dns" Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.927054 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14543e4-52bb-497f-bec7-d986ec4545e5" containerName="dnsmasq-dns" Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.927213 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14543e4-52bb-497f-bec7-d986ec4545e5" containerName="dnsmasq-dns" 
Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.927223 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c58d8c-7243-48d4-8359-c400be398f94" containerName="mariadb-account-create-update" Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.927828 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.932892 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 08:07:26 crc kubenswrapper[4799]: I0127 08:07:26.940013 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5wzgq"] Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.024157 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94a9d5a4-a457-4cf4-92aa-2de96430c864-operator-scripts\") pod \"root-account-create-update-5wzgq\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") " pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.024312 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzp9m\" (UniqueName: \"kubernetes.io/projected/94a9d5a4-a457-4cf4-92aa-2de96430c864-kube-api-access-jzp9m\") pod \"root-account-create-update-5wzgq\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") " pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.126833 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94a9d5a4-a457-4cf4-92aa-2de96430c864-operator-scripts\") pod \"root-account-create-update-5wzgq\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") " 
pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.126996 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzp9m\" (UniqueName: \"kubernetes.io/projected/94a9d5a4-a457-4cf4-92aa-2de96430c864-kube-api-access-jzp9m\") pod \"root-account-create-update-5wzgq\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") " pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.128682 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94a9d5a4-a457-4cf4-92aa-2de96430c864-operator-scripts\") pod \"root-account-create-update-5wzgq\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") " pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.148903 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzp9m\" (UniqueName: \"kubernetes.io/projected/94a9d5a4-a457-4cf4-92aa-2de96430c864-kube-api-access-jzp9m\") pod \"root-account-create-update-5wzgq\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") " pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.229071 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.234291 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"swift-storage-0\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " pod="openstack/swift-storage-0" Jan 27 08:07:27 crc kubenswrapper[4799]: 
I0127 08:07:27.297630 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5wzgq" Jan 27 08:07:27 crc kubenswrapper[4799]: I0127 08:07:27.507164 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 08:07:28 crc kubenswrapper[4799]: I0127 08:07:28.894524 4799 scope.go:117] "RemoveContainer" containerID="7c6c2f5fcdc9ba28ed0f22244ff75a7f412c6822d9d23eb91ca28beb2c719adf" Jan 27 08:07:28 crc kubenswrapper[4799]: I0127 08:07:28.991511 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.165998 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-ring-data-devices\") pod \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.166054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-dispersionconf\") pod \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.166156 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-combined-ca-bundle\") pod \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.166195 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-etc-swift\") pod \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.166250 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-scripts\") pod \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.166274 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-swiftconf\") pod \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.166353 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x49h6\" (UniqueName: \"kubernetes.io/projected/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-kube-api-access-x49h6\") pod \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\" (UID: \"b3b3796e-e4b6-41e1-b1f5-dc7ade294816\") " Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.169015 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b3b3796e-e4b6-41e1-b1f5-dc7ade294816" (UID: "b3b3796e-e4b6-41e1-b1f5-dc7ade294816"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.170957 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b3b3796e-e4b6-41e1-b1f5-dc7ade294816" (UID: "b3b3796e-e4b6-41e1-b1f5-dc7ade294816"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.173247 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-kube-api-access-x49h6" (OuterVolumeSpecName: "kube-api-access-x49h6") pod "b3b3796e-e4b6-41e1-b1f5-dc7ade294816" (UID: "b3b3796e-e4b6-41e1-b1f5-dc7ade294816"). InnerVolumeSpecName "kube-api-access-x49h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.177767 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b3b3796e-e4b6-41e1-b1f5-dc7ade294816" (UID: "b3b3796e-e4b6-41e1-b1f5-dc7ade294816"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.195575 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b3b3796e-e4b6-41e1-b1f5-dc7ade294816" (UID: "b3b3796e-e4b6-41e1-b1f5-dc7ade294816"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.202727 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-scripts" (OuterVolumeSpecName: "scripts") pod "b3b3796e-e4b6-41e1-b1f5-dc7ade294816" (UID: "b3b3796e-e4b6-41e1-b1f5-dc7ade294816"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.205464 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3b3796e-e4b6-41e1-b1f5-dc7ade294816" (UID: "b3b3796e-e4b6-41e1-b1f5-dc7ade294816"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.268761 4799 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.268799 4799 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.268815 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.268826 4799 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 
08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.268837 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.268848 4799 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.268863 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x49h6\" (UniqueName: \"kubernetes.io/projected/b3b3796e-e4b6-41e1-b1f5-dc7ade294816-kube-api-access-x49h6\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.516850 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5wzgq"] Jan 27 08:07:29 crc kubenswrapper[4799]: W0127 08:07:29.525675 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a9d5a4_a457_4cf4_92aa_2de96430c864.slice/crio-f0e39b360b191c7a02a443898083f3d030d8bffad538a70ddc5274c13b5db956 WatchSource:0}: Error finding container f0e39b360b191c7a02a443898083f3d030d8bffad538a70ddc5274c13b5db956: Status 404 returned error can't find the container with id f0e39b360b191c7a02a443898083f3d030d8bffad538a70ddc5274c13b5db956 Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.609554 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 08:07:29 crc kubenswrapper[4799]: W0127 08:07:29.622973 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf707c5d5_a9c3_4fdb_8361_9604b6b70153.slice/crio-93d96ff78264b52b471037a52000aa722f8578f5c5c48d5a662ada9c6c454fc5 WatchSource:0}: Error finding container 
93d96ff78264b52b471037a52000aa722f8578f5c5c48d5a662ada9c6c454fc5: Status 404 returned error can't find the container with id 93d96ff78264b52b471037a52000aa722f8578f5c5c48d5a662ada9c6c454fc5 Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.656699 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-lx6nr" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" probeResult="failure" output=< Jan 27 08:07:29 crc kubenswrapper[4799]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 08:07:29 crc kubenswrapper[4799]: > Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.669443 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.670888 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.834994 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nnhs2" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.834994 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nnhs2" event={"ID":"b3b3796e-e4b6-41e1-b1f5-dc7ade294816","Type":"ContainerDied","Data":"ef6eda5c821716baafd33d304018eb5c9bdf941be1209ae41fb1f3c209cd9041"} Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.835052 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6eda5c821716baafd33d304018eb5c9bdf941be1209ae41fb1f3c209cd9041" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.836738 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"93d96ff78264b52b471037a52000aa722f8578f5c5c48d5a662ada9c6c454fc5"} Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.838682 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5wzgq" event={"ID":"94a9d5a4-a457-4cf4-92aa-2de96430c864","Type":"ContainerStarted","Data":"fc2b3e78461f31ae8c0d985b3bfc7d70d2d784521f140b7a91d5d07cc8a1c1bb"} Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.838711 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5wzgq" event={"ID":"94a9d5a4-a457-4cf4-92aa-2de96430c864","Type":"ContainerStarted","Data":"f0e39b360b191c7a02a443898083f3d030d8bffad538a70ddc5274c13b5db956"} Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.840609 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-22s2d" event={"ID":"1e7f06a1-752e-4e8b-9d59-991326981dda","Type":"ContainerStarted","Data":"16cbdbeec150e5934b19c040711658f9ffae0d22a481b7b98c67646ecc86a55d"} Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.858411 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/root-account-create-update-5wzgq" podStartSLOduration=3.858383554 podStartE2EDuration="3.858383554s" podCreationTimestamp="2026-01-27 08:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:29.852599953 +0000 UTC m=+1316.163704058" watchObservedRunningTime="2026-01-27 08:07:29.858383554 +0000 UTC m=+1316.169487629" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.893468 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-22s2d" podStartSLOduration=4.944339655 podStartE2EDuration="16.89344362s" podCreationTimestamp="2026-01-27 08:07:13 +0000 UTC" firstStartedPulling="2026-01-27 08:07:17.153838976 +0000 UTC m=+1303.464943041" lastFinishedPulling="2026-01-27 08:07:29.102942941 +0000 UTC m=+1315.414047006" observedRunningTime="2026-01-27 08:07:29.892117824 +0000 UTC m=+1316.203221889" watchObservedRunningTime="2026-01-27 08:07:29.89344362 +0000 UTC m=+1316.204547695" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.931540 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lx6nr-config-jsq6j"] Jan 27 08:07:29 crc kubenswrapper[4799]: E0127 08:07:29.933758 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b3796e-e4b6-41e1-b1f5-dc7ade294816" containerName="swift-ring-rebalance" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.933786 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b3796e-e4b6-41e1-b1f5-dc7ade294816" containerName="swift-ring-rebalance" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.934122 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b3796e-e4b6-41e1-b1f5-dc7ade294816" containerName="swift-ring-rebalance" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.935119 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.937914 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 08:07:29 crc kubenswrapper[4799]: I0127 08:07:29.950100 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lx6nr-config-jsq6j"] Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.089074 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-log-ovn\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.089457 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-additional-scripts\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.089493 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.089529 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run-ovn\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: 
\"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.089717 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-scripts\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.089915 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8jr\" (UniqueName: \"kubernetes.io/projected/52b4f31d-10e3-4fcf-a531-72197d81a764-kube-api-access-9q8jr\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191136 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run-ovn\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191194 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-scripts\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191248 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8jr\" (UniqueName: \"kubernetes.io/projected/52b4f31d-10e3-4fcf-a531-72197d81a764-kube-api-access-9q8jr\") pod 
\"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191388 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-log-ovn\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191410 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-additional-scripts\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191458 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191713 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run-ovn\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191733 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: 
\"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.191744 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-log-ovn\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.192646 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-additional-scripts\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.193639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-scripts\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.221959 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8jr\" (UniqueName: \"kubernetes.io/projected/52b4f31d-10e3-4fcf-a531-72197d81a764-kube-api-access-9q8jr\") pod \"ovn-controller-lx6nr-config-jsq6j\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") " pod="openstack/ovn-controller-lx6nr-config-jsq6j" Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.269345 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lx6nr-config-jsq6j"
Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.718428 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lx6nr-config-jsq6j"]
Jan 27 08:07:30 crc kubenswrapper[4799]: W0127 08:07:30.786473 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b4f31d_10e3_4fcf_a531_72197d81a764.slice/crio-8f3b0ae30b4440782b77b5a99938e65fadd63d471b9687e6df9071f62fac3fbb WatchSource:0}: Error finding container 8f3b0ae30b4440782b77b5a99938e65fadd63d471b9687e6df9071f62fac3fbb: Status 404 returned error can't find the container with id 8f3b0ae30b4440782b77b5a99938e65fadd63d471b9687e6df9071f62fac3fbb
Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.852633 4799 generic.go:334] "Generic (PLEG): container finished" podID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerID="eeb8b3edecbf9c4102ac408a97dae6338b573a4495066dcb7a4630df2561b314" exitCode=0
Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.852727 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace","Type":"ContainerDied","Data":"eeb8b3edecbf9c4102ac408a97dae6338b573a4495066dcb7a4630df2561b314"}
Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.856641 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lx6nr-config-jsq6j" event={"ID":"52b4f31d-10e3-4fcf-a531-72197d81a764","Type":"ContainerStarted","Data":"8f3b0ae30b4440782b77b5a99938e65fadd63d471b9687e6df9071f62fac3fbb"}
Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.859023 4799 generic.go:334] "Generic (PLEG): container finished" podID="94a9d5a4-a457-4cf4-92aa-2de96430c864" containerID="fc2b3e78461f31ae8c0d985b3bfc7d70d2d784521f140b7a91d5d07cc8a1c1bb" exitCode=0
Jan 27 08:07:30 crc kubenswrapper[4799]: I0127 08:07:30.859100 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5wzgq" event={"ID":"94a9d5a4-a457-4cf4-92aa-2de96430c864","Type":"ContainerDied","Data":"fc2b3e78461f31ae8c0d985b3bfc7d70d2d784521f140b7a91d5d07cc8a1c1bb"}
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.878684 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329"}
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.879325 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8"}
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.879347 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca"}
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.879363 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd"}
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.885613 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lx6nr-config-jsq6j" event={"ID":"52b4f31d-10e3-4fcf-a531-72197d81a764","Type":"ContainerDied","Data":"875be4c83d9eb5ce22a15cfed93a30c532bc23ab0615fda18bc0dede97ba831d"}
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.885475 4799 generic.go:334] "Generic (PLEG): container finished" podID="52b4f31d-10e3-4fcf-a531-72197d81a764" containerID="875be4c83d9eb5ce22a15cfed93a30c532bc23ab0615fda18bc0dede97ba831d" exitCode=0
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.891767 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace","Type":"ContainerStarted","Data":"89a107c494f936fe6c451a1549012a9e938164def6d5383c5c024b222a155ca4"}
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.891972 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 27 08:07:31 crc kubenswrapper[4799]: I0127 08:07:31.932193 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.947143778 podStartE2EDuration="1m18.932168618s" podCreationTimestamp="2026-01-27 08:06:13 +0000 UTC" firstStartedPulling="2026-01-27 08:06:14.883948819 +0000 UTC m=+1241.195052884" lastFinishedPulling="2026-01-27 08:06:56.868973639 +0000 UTC m=+1283.180077724" observedRunningTime="2026-01-27 08:07:31.931648483 +0000 UTC m=+1318.242752558" watchObservedRunningTime="2026-01-27 08:07:31.932168618 +0000 UTC m=+1318.243272683"
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.270677 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5wzgq"
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.446869 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94a9d5a4-a457-4cf4-92aa-2de96430c864-operator-scripts\") pod \"94a9d5a4-a457-4cf4-92aa-2de96430c864\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") "
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.446993 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzp9m\" (UniqueName: \"kubernetes.io/projected/94a9d5a4-a457-4cf4-92aa-2de96430c864-kube-api-access-jzp9m\") pod \"94a9d5a4-a457-4cf4-92aa-2de96430c864\" (UID: \"94a9d5a4-a457-4cf4-92aa-2de96430c864\") "
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.447276 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94a9d5a4-a457-4cf4-92aa-2de96430c864-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94a9d5a4-a457-4cf4-92aa-2de96430c864" (UID: "94a9d5a4-a457-4cf4-92aa-2de96430c864"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.451980 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a9d5a4-a457-4cf4-92aa-2de96430c864-kube-api-access-jzp9m" (OuterVolumeSpecName: "kube-api-access-jzp9m") pod "94a9d5a4-a457-4cf4-92aa-2de96430c864" (UID: "94a9d5a4-a457-4cf4-92aa-2de96430c864"). InnerVolumeSpecName "kube-api-access-jzp9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.549125 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzp9m\" (UniqueName: \"kubernetes.io/projected/94a9d5a4-a457-4cf4-92aa-2de96430c864-kube-api-access-jzp9m\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.549170 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94a9d5a4-a457-4cf4-92aa-2de96430c864-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.900815 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5wzgq" event={"ID":"94a9d5a4-a457-4cf4-92aa-2de96430c864","Type":"ContainerDied","Data":"f0e39b360b191c7a02a443898083f3d030d8bffad538a70ddc5274c13b5db956"}
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.901202 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e39b360b191c7a02a443898083f3d030d8bffad538a70ddc5274c13b5db956"
Jan 27 08:07:32 crc kubenswrapper[4799]: I0127 08:07:32.900911 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5wzgq"
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.221029 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lx6nr-config-jsq6j"
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.362493 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run\") pod \"52b4f31d-10e3-4fcf-a531-72197d81a764\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") "
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363003 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-additional-scripts\") pod \"52b4f31d-10e3-4fcf-a531-72197d81a764\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") "
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363049 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-log-ovn\") pod \"52b4f31d-10e3-4fcf-a531-72197d81a764\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") "
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363074 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q8jr\" (UniqueName: \"kubernetes.io/projected/52b4f31d-10e3-4fcf-a531-72197d81a764-kube-api-access-9q8jr\") pod \"52b4f31d-10e3-4fcf-a531-72197d81a764\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") "
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363123 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run-ovn\") pod \"52b4f31d-10e3-4fcf-a531-72197d81a764\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") "
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363157 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-scripts\") pod \"52b4f31d-10e3-4fcf-a531-72197d81a764\" (UID: \"52b4f31d-10e3-4fcf-a531-72197d81a764\") "
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363405 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run" (OuterVolumeSpecName: "var-run") pod "52b4f31d-10e3-4fcf-a531-72197d81a764" (UID: "52b4f31d-10e3-4fcf-a531-72197d81a764"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363830 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.363869 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "52b4f31d-10e3-4fcf-a531-72197d81a764" (UID: "52b4f31d-10e3-4fcf-a531-72197d81a764"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.364579 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "52b4f31d-10e3-4fcf-a531-72197d81a764" (UID: "52b4f31d-10e3-4fcf-a531-72197d81a764"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.365147 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-scripts" (OuterVolumeSpecName: "scripts") pod "52b4f31d-10e3-4fcf-a531-72197d81a764" (UID: "52b4f31d-10e3-4fcf-a531-72197d81a764"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.365226 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "52b4f31d-10e3-4fcf-a531-72197d81a764" (UID: "52b4f31d-10e3-4fcf-a531-72197d81a764"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.369805 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b4f31d-10e3-4fcf-a531-72197d81a764-kube-api-access-9q8jr" (OuterVolumeSpecName: "kube-api-access-9q8jr") pod "52b4f31d-10e3-4fcf-a531-72197d81a764" (UID: "52b4f31d-10e3-4fcf-a531-72197d81a764"). InnerVolumeSpecName "kube-api-access-9q8jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.465154 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.465189 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.465203 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q8jr\" (UniqueName: \"kubernetes.io/projected/52b4f31d-10e3-4fcf-a531-72197d81a764-kube-api-access-9q8jr\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.465213 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52b4f31d-10e3-4fcf-a531-72197d81a764-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.465224 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b4f31d-10e3-4fcf-a531-72197d81a764-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.910155 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lx6nr-config-jsq6j" event={"ID":"52b4f31d-10e3-4fcf-a531-72197d81a764","Type":"ContainerDied","Data":"8f3b0ae30b4440782b77b5a99938e65fadd63d471b9687e6df9071f62fac3fbb"}
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.910204 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f3b0ae30b4440782b77b5a99938e65fadd63d471b9687e6df9071f62fac3fbb"
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.910208 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lx6nr-config-jsq6j"
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.914968 4799 generic.go:334] "Generic (PLEG): container finished" podID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerID="f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8" exitCode=0
Jan 27 08:07:33 crc kubenswrapper[4799]: I0127 08:07:33.915010 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0","Type":"ContainerDied","Data":"f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8"}
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.347435 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lx6nr-config-jsq6j"]
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.367020 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lx6nr-config-jsq6j"]
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.472031 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b4f31d-10e3-4fcf-a531-72197d81a764" path="/var/lib/kubelet/pods/52b4f31d-10e3-4fcf-a531-72197d81a764/volumes"
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.654394 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-lx6nr"
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.931481 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0","Type":"ContainerStarted","Data":"966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713"}
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.932389 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.940496 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3"}
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.940639 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732"}
Jan 27 08:07:34 crc kubenswrapper[4799]: I0127 08:07:34.940732 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd"}
Jan 27 08:07:38 crc kubenswrapper[4799]: I0127 08:07:38.977430 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751"}
Jan 27 08:07:41 crc kubenswrapper[4799]: I0127 08:07:41.004500 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108"}
Jan 27 08:07:41 crc kubenswrapper[4799]: I0127 08:07:41.004988 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45"}
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.017274 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd"}
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.017636 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a"}
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.017652 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a"}
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.017665 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f"}
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.017677 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerStarted","Data":"654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e"}
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.059649 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371948.795147 podStartE2EDuration="1m28.059628014s" podCreationTimestamp="2026-01-27 08:06:14 +0000 UTC" firstStartedPulling="2026-01-27 08:06:16.094274493 +0000 UTC m=+1242.405378558" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:34.97240178 +0000 UTC m=+1321.283505865" watchObservedRunningTime="2026-01-27 08:07:42.059628014 +0000 UTC m=+1328.370732089"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.062657 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.147254202 podStartE2EDuration="32.062647787s" podCreationTimestamp="2026-01-27 08:07:10 +0000 UTC" firstStartedPulling="2026-01-27 08:07:29.62584225 +0000 UTC m=+1315.936946305" lastFinishedPulling="2026-01-27 08:07:40.541235825 +0000 UTC m=+1326.852339890" observedRunningTime="2026-01-27 08:07:42.060128917 +0000 UTC m=+1328.371233002" watchObservedRunningTime="2026-01-27 08:07:42.062647787 +0000 UTC m=+1328.373751862"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.473277 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4c4x7"]
Jan 27 08:07:42 crc kubenswrapper[4799]: E0127 08:07:42.484710 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b4f31d-10e3-4fcf-a531-72197d81a764" containerName="ovn-config"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.484743 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b4f31d-10e3-4fcf-a531-72197d81a764" containerName="ovn-config"
Jan 27 08:07:42 crc kubenswrapper[4799]: E0127 08:07:42.484773 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a9d5a4-a457-4cf4-92aa-2de96430c864" containerName="mariadb-account-create-update"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.484779 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a9d5a4-a457-4cf4-92aa-2de96430c864" containerName="mariadb-account-create-update"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.484946 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b4f31d-10e3-4fcf-a531-72197d81a764" containerName="ovn-config"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.484971 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a9d5a4-a457-4cf4-92aa-2de96430c864" containerName="mariadb-account-create-update"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.485820 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.489246 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4c4x7"]
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.489912 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.622133 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2m2\" (UniqueName: \"kubernetes.io/projected/0947eb13-2cb0-48b9-944d-4ae4d3db110c-kube-api-access-qf2m2\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.622234 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.622264 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.622293 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-config\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.622412 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.622491 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.724866 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.724936 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf2m2\" (UniqueName: \"kubernetes.io/projected/0947eb13-2cb0-48b9-944d-4ae4d3db110c-kube-api-access-qf2m2\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.724998 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.725023 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.725083 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-config\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.725171 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.726524 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.727394 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.728235 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-config\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.728294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.728364 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.755152 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf2m2\" (UniqueName: \"kubernetes.io/projected/0947eb13-2cb0-48b9-944d-4ae4d3db110c-kube-api-access-qf2m2\") pod \"dnsmasq-dns-764c5664d7-4c4x7\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:42 crc kubenswrapper[4799]: I0127 08:07:42.823977 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:43 crc kubenswrapper[4799]: I0127 08:07:43.321006 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4c4x7"]
Jan 27 08:07:43 crc kubenswrapper[4799]: W0127 08:07:43.329048 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0947eb13_2cb0_48b9_944d_4ae4d3db110c.slice/crio-046d5460b26153367d62d13fa8aa486db56f39fda89611103f9d1f72fc576737 WatchSource:0}: Error finding container 046d5460b26153367d62d13fa8aa486db56f39fda89611103f9d1f72fc576737: Status 404 returned error can't find the container with id 046d5460b26153367d62d13fa8aa486db56f39fda89611103f9d1f72fc576737
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.034289 4799 generic.go:334] "Generic (PLEG): container finished" podID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerID="fc49bc58c0d0f8ade28b95273e683d735d4bd2fd74ab1408e0ca7cd88feb6437" exitCode=0
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.034441 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" event={"ID":"0947eb13-2cb0-48b9-944d-4ae4d3db110c","Type":"ContainerDied","Data":"fc49bc58c0d0f8ade28b95273e683d735d4bd2fd74ab1408e0ca7cd88feb6437"}
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.034663 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" event={"ID":"0947eb13-2cb0-48b9-944d-4ae4d3db110c","Type":"ContainerStarted","Data":"046d5460b26153367d62d13fa8aa486db56f39fda89611103f9d1f72fc576737"}
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.036587 4799 generic.go:334] "Generic (PLEG): container finished" podID="1e7f06a1-752e-4e8b-9d59-991326981dda" containerID="16cbdbeec150e5934b19c040711658f9ffae0d22a481b7b98c67646ecc86a55d" exitCode=0
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.036629 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-22s2d" event={"ID":"1e7f06a1-752e-4e8b-9d59-991326981dda","Type":"ContainerDied","Data":"16cbdbeec150e5934b19c040711658f9ffae0d22a481b7b98c67646ecc86a55d"}
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.600483 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.877896 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8gkts"]
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.879749 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8gkts"
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.901206 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8gkts"]
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.965832 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jqxqz"]
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.967419 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqxqz"
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.973232 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkx9s\" (UniqueName: \"kubernetes.io/projected/24508566-6f8e-48c4-a3e5-088544cd6b94-kube-api-access-jkx9s\") pod \"cinder-db-create-8gkts\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " pod="openstack/cinder-db-create-8gkts"
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.973322 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24508566-6f8e-48c4-a3e5-088544cd6b94-operator-scripts\") pod \"cinder-db-create-8gkts\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " pod="openstack/cinder-db-create-8gkts"
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.978432 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5c72-account-create-update-9x5nw"]
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.979856 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-9x5nw"
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.984031 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 27 08:07:44 crc kubenswrapper[4799]: I0127 08:07:44.988414 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqxqz"]
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.001742 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5c72-account-create-update-9x5nw"]
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.053374 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" event={"ID":"0947eb13-2cb0-48b9-944d-4ae4d3db110c","Type":"ContainerStarted","Data":"84b7b7f060f9d443695f5f2c676792466217b8527027454d39ebf62fc53abc75"}
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.053411 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.075410 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24508566-6f8e-48c4-a3e5-088544cd6b94-operator-scripts\") pod \"cinder-db-create-8gkts\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " pod="openstack/cinder-db-create-8gkts"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.076015 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfe230e4-078d-4aeb-858f-296dd5505f4a-operator-scripts\") pod \"cinder-5c72-account-create-update-9x5nw\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " pod="openstack/cinder-5c72-account-create-update-9x5nw"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.076174 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-operator-scripts\") pod \"barbican-db-create-jqxqz\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") " pod="openstack/barbican-db-create-jqxqz"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.076458 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2scrj\" (UniqueName: \"kubernetes.io/projected/dfe230e4-078d-4aeb-858f-296dd5505f4a-kube-api-access-2scrj\") pod \"cinder-5c72-account-create-update-9x5nw\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " pod="openstack/cinder-5c72-account-create-update-9x5nw"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.076590 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfv57\" (UniqueName: \"kubernetes.io/projected/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-kube-api-access-dfv57\") pod \"barbican-db-create-jqxqz\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") " pod="openstack/barbican-db-create-jqxqz"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.076807 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkx9s\" (UniqueName: \"kubernetes.io/projected/24508566-6f8e-48c4-a3e5-088544cd6b94-kube-api-access-jkx9s\") pod \"cinder-db-create-8gkts\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " pod="openstack/cinder-db-create-8gkts"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.078044 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24508566-6f8e-48c4-a3e5-088544cd6b94-operator-scripts\") pod \"cinder-db-create-8gkts\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " pod="openstack/cinder-db-create-8gkts"
Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.094091 4799
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-abfe-account-create-update-l8nsq"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.095242 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.103515 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" podStartSLOduration=3.103479835 podStartE2EDuration="3.103479835s" podCreationTimestamp="2026-01-27 08:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:45.081116759 +0000 UTC m=+1331.392220824" watchObservedRunningTime="2026-01-27 08:07:45.103479835 +0000 UTC m=+1331.414583910" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.113578 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-abfe-account-create-update-l8nsq"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.115125 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.115702 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkx9s\" (UniqueName: \"kubernetes.io/projected/24508566-6f8e-48c4-a3e5-088544cd6b94-kube-api-access-jkx9s\") pod \"cinder-db-create-8gkts\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " pod="openstack/cinder-db-create-8gkts" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.178069 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfe230e4-078d-4aeb-858f-296dd5505f4a-operator-scripts\") pod \"cinder-5c72-account-create-update-9x5nw\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " 
pod="openstack/cinder-5c72-account-create-update-9x5nw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.178384 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-operator-scripts\") pod \"barbican-db-create-jqxqz\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") " pod="openstack/barbican-db-create-jqxqz" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.178563 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-operator-scripts\") pod \"barbican-abfe-account-create-update-l8nsq\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") " pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.178822 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfe230e4-078d-4aeb-858f-296dd5505f4a-operator-scripts\") pod \"cinder-5c72-account-create-update-9x5nw\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " pod="openstack/cinder-5c72-account-create-update-9x5nw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.179323 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-operator-scripts\") pod \"barbican-db-create-jqxqz\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") " pod="openstack/barbican-db-create-jqxqz" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.179228 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wqqg\" (UniqueName: \"kubernetes.io/projected/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-kube-api-access-6wqqg\") pod 
\"barbican-abfe-account-create-update-l8nsq\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") " pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.179824 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2scrj\" (UniqueName: \"kubernetes.io/projected/dfe230e4-078d-4aeb-858f-296dd5505f4a-kube-api-access-2scrj\") pod \"cinder-5c72-account-create-update-9x5nw\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " pod="openstack/cinder-5c72-account-create-update-9x5nw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.179925 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfv57\" (UniqueName: \"kubernetes.io/projected/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-kube-api-access-dfv57\") pod \"barbican-db-create-jqxqz\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") " pod="openstack/barbican-db-create-jqxqz" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.200549 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2scrj\" (UniqueName: \"kubernetes.io/projected/dfe230e4-078d-4aeb-858f-296dd5505f4a-kube-api-access-2scrj\") pod \"cinder-5c72-account-create-update-9x5nw\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " pod="openstack/cinder-5c72-account-create-update-9x5nw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.201728 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfv57\" (UniqueName: \"kubernetes.io/projected/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-kube-api-access-dfv57\") pod \"barbican-db-create-jqxqz\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") " pod="openstack/barbican-db-create-jqxqz" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.209207 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-8gkts" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.269754 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zbptw"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.270673 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.281414 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-operator-scripts\") pod \"barbican-abfe-account-create-update-l8nsq\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") " pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.281598 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wqqg\" (UniqueName: \"kubernetes.io/projected/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-kube-api-access-6wqqg\") pod \"barbican-abfe-account-create-update-l8nsq\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") " pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.282476 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jqxqz" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.282555 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-operator-scripts\") pod \"barbican-abfe-account-create-update-l8nsq\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") " pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.282662 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zbptw"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.298844 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-9x5nw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.312554 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wqqg\" (UniqueName: \"kubernetes.io/projected/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-kube-api-access-6wqqg\") pod \"barbican-abfe-account-create-update-l8nsq\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") " pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.353638 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fq478"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.354581 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.376751 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.377029 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.377196 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.377530 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shqm7" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.383242 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/666620a1-4d36-48ba-a226-e4ba6b9d82a0-operator-scripts\") pod \"neutron-db-create-zbptw\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.383322 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4zf\" (UniqueName: \"kubernetes.io/projected/666620a1-4d36-48ba-a226-e4ba6b9d82a0-kube-api-access-7c4zf\") pod \"neutron-db-create-zbptw\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.394467 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fq478"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.449236 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.465379 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3eda-account-create-update-v5sgh"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.466637 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.472764 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.472855 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.480138 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3eda-account-create-update-v5sgh"] Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.484111 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvxk\" (UniqueName: \"kubernetes.io/projected/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-kube-api-access-8fvxk\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.484141 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-config-data\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.484177 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/666620a1-4d36-48ba-a226-e4ba6b9d82a0-operator-scripts\") pod \"neutron-db-create-zbptw\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.484230 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-combined-ca-bundle\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.484268 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4zf\" (UniqueName: \"kubernetes.io/projected/666620a1-4d36-48ba-a226-e4ba6b9d82a0-kube-api-access-7c4zf\") pod \"neutron-db-create-zbptw\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.484798 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/666620a1-4d36-48ba-a226-e4ba6b9d82a0-operator-scripts\") pod \"neutron-db-create-zbptw\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.513031 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4zf\" (UniqueName: \"kubernetes.io/projected/666620a1-4d36-48ba-a226-e4ba6b9d82a0-kube-api-access-7c4zf\") pod \"neutron-db-create-zbptw\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.583470 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.588546 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.589024 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fvxk\" (UniqueName: \"kubernetes.io/projected/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-kube-api-access-8fvxk\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.589066 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgq9s\" (UniqueName: \"kubernetes.io/projected/b9ea3026-b416-47dd-b55a-994533c7f302-kube-api-access-pgq9s\") pod \"neutron-3eda-account-create-update-v5sgh\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.589087 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-config-data\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.589130 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ea3026-b416-47dd-b55a-994533c7f302-operator-scripts\") pod \"neutron-3eda-account-create-update-v5sgh\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.589178 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-combined-ca-bundle\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.592589 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-combined-ca-bundle\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.592878 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-config-data\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.609721 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fvxk\" (UniqueName: \"kubernetes.io/projected/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-kube-api-access-8fvxk\") pod \"keystone-db-sync-fq478\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") " pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.690012 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-combined-ca-bundle\") pod \"1e7f06a1-752e-4e8b-9d59-991326981dda\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.690479 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqtpm\" (UniqueName: 
\"kubernetes.io/projected/1e7f06a1-752e-4e8b-9d59-991326981dda-kube-api-access-lqtpm\") pod \"1e7f06a1-752e-4e8b-9d59-991326981dda\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.690574 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-config-data\") pod \"1e7f06a1-752e-4e8b-9d59-991326981dda\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.690672 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-db-sync-config-data\") pod \"1e7f06a1-752e-4e8b-9d59-991326981dda\" (UID: \"1e7f06a1-752e-4e8b-9d59-991326981dda\") " Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.690948 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgq9s\" (UniqueName: \"kubernetes.io/projected/b9ea3026-b416-47dd-b55a-994533c7f302-kube-api-access-pgq9s\") pod \"neutron-3eda-account-create-update-v5sgh\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.690991 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ea3026-b416-47dd-b55a-994533c7f302-operator-scripts\") pod \"neutron-3eda-account-create-update-v5sgh\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.691877 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ea3026-b416-47dd-b55a-994533c7f302-operator-scripts\") pod 
\"neutron-3eda-account-create-update-v5sgh\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.700708 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fq478" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.704268 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7f06a1-752e-4e8b-9d59-991326981dda-kube-api-access-lqtpm" (OuterVolumeSpecName: "kube-api-access-lqtpm") pod "1e7f06a1-752e-4e8b-9d59-991326981dda" (UID: "1e7f06a1-752e-4e8b-9d59-991326981dda"). InnerVolumeSpecName "kube-api-access-lqtpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.717394 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1e7f06a1-752e-4e8b-9d59-991326981dda" (UID: "1e7f06a1-752e-4e8b-9d59-991326981dda"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.719110 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgq9s\" (UniqueName: \"kubernetes.io/projected/b9ea3026-b416-47dd-b55a-994533c7f302-kube-api-access-pgq9s\") pod \"neutron-3eda-account-create-update-v5sgh\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.728820 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e7f06a1-752e-4e8b-9d59-991326981dda" (UID: "1e7f06a1-752e-4e8b-9d59-991326981dda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.733365 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-config-data" (OuterVolumeSpecName: "config-data") pod "1e7f06a1-752e-4e8b-9d59-991326981dda" (UID: "1e7f06a1-752e-4e8b-9d59-991326981dda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.783861 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.792331 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.792374 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.792386 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqtpm\" (UniqueName: \"kubernetes.io/projected/1e7f06a1-752e-4e8b-9d59-991326981dda-kube-api-access-lqtpm\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:45 crc kubenswrapper[4799]: I0127 08:07:45.792400 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7f06a1-752e-4e8b-9d59-991326981dda-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.077502 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-22s2d" event={"ID":"1e7f06a1-752e-4e8b-9d59-991326981dda","Type":"ContainerDied","Data":"1836536ff3ed05a60b70fa11155386632abe179de5950659064663568c7cd9f4"} Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.077540 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1836536ff3ed05a60b70fa11155386632abe179de5950659064663568c7cd9f4" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.077550 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-22s2d" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.605331 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fq478"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.617478 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8gkts"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.629590 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4c4x7"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.672661 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3eda-account-create-update-v5sgh"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.689988 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-abfe-account-create-update-l8nsq"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.715398 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqxqz"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.731322 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5c72-account-create-update-9x5nw"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.751226 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mgb2c"] Jan 27 08:07:46 crc kubenswrapper[4799]: E0127 08:07:46.751599 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7f06a1-752e-4e8b-9d59-991326981dda" containerName="glance-db-sync" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.751617 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7f06a1-752e-4e8b-9d59-991326981dda" containerName="glance-db-sync" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.751820 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7f06a1-752e-4e8b-9d59-991326981dda" containerName="glance-db-sync" Jan 27 
08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.752693 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zbptw"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.752780 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.763322 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mgb2c"] Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.916854 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.916962 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.916995 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.917064 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bh4\" (UniqueName: \"kubernetes.io/projected/f7687713-4d41-4085-aef9-4e0478651f4a-kube-api-access-66bh4\") 
pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.917093 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:46 crc kubenswrapper[4799]: I0127 08:07:46.917199 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-config\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.018282 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.018691 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.018721 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" 
(UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.018776 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66bh4\" (UniqueName: \"kubernetes.io/projected/f7687713-4d41-4085-aef9-4e0478651f4a-kube-api-access-66bh4\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.018799 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.018859 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-config\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.019502 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.019816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-config\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.019926 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.022258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.022311 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.062404 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66bh4\" (UniqueName: \"kubernetes.io/projected/f7687713-4d41-4085-aef9-4e0478651f4a-kube-api-access-66bh4\") pod \"dnsmasq-dns-74f6bcbc87-mgb2c\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.086905 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8gkts" event={"ID":"24508566-6f8e-48c4-a3e5-088544cd6b94","Type":"ContainerStarted","Data":"aa68947b61b90b18f826ecd07b960ffc3f5c5a7d354af1cb61632df66b134a69"} Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.087955 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-db-sync-fq478" event={"ID":"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37","Type":"ContainerStarted","Data":"b95d945c4853b9644d7e1c7f7cf45ab8346bcc248ab74443853a874751eb1fce"} Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.089040 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zbptw" event={"ID":"666620a1-4d36-48ba-a226-e4ba6b9d82a0","Type":"ContainerStarted","Data":"9536b28742f86277366382b8f6b2d617af1fe28945084736979c69ac4fba7c26"} Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.090280 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3eda-account-create-update-v5sgh" event={"ID":"b9ea3026-b416-47dd-b55a-994533c7f302","Type":"ContainerStarted","Data":"2a20fd331e58b4c0a2e162e3819916b67cadf8d311fab99c4544e3e6c2eefcd0"} Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.091518 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c72-account-create-update-9x5nw" event={"ID":"dfe230e4-078d-4aeb-858f-296dd5505f4a","Type":"ContainerStarted","Data":"84d8578317b5afb8c907124aea142fa8ec7109eae0c6cb66835933447a02b91f"} Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.092624 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-abfe-account-create-update-l8nsq" event={"ID":"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5","Type":"ContainerStarted","Data":"54fcd014e9dd605fcfcc006d8ae148d2434cfd78c9c282f0b63e524ad5caeca4"} Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.094272 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqxqz" event={"ID":"5b01cc32-2ffb-4377-afff-7fbaa3d14de7","Type":"ContainerStarted","Data":"c4bd37ea18ddbed7dbcafd2e7f0d5c916e181cc4aa62aba1489119a94cdcbc9e"} Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.094476 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" 
podUID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerName="dnsmasq-dns" containerID="cri-o://84b7b7f060f9d443695f5f2c676792466217b8527027454d39ebf62fc53abc75" gracePeriod=10 Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.102695 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:07:47 crc kubenswrapper[4799]: I0127 08:07:47.624831 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mgb2c"] Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.104181 4799 generic.go:334] "Generic (PLEG): container finished" podID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerID="84b7b7f060f9d443695f5f2c676792466217b8527027454d39ebf62fc53abc75" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.104377 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" event={"ID":"0947eb13-2cb0-48b9-944d-4ae4d3db110c","Type":"ContainerDied","Data":"84b7b7f060f9d443695f5f2c676792466217b8527027454d39ebf62fc53abc75"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.104558 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" event={"ID":"0947eb13-2cb0-48b9-944d-4ae4d3db110c","Type":"ContainerDied","Data":"046d5460b26153367d62d13fa8aa486db56f39fda89611103f9d1f72fc576737"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.104576 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="046d5460b26153367d62d13fa8aa486db56f39fda89611103f9d1f72fc576737" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.106554 4799 generic.go:334] "Generic (PLEG): container finished" podID="a3acfc06-6e63-4a08-a201-50a9d6fe8ed5" containerID="7fff2a80febe3280b1d6e57b5c687153274f75957492becf2d7bfc6cffdb5f65" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.106618 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-abfe-account-create-update-l8nsq" event={"ID":"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5","Type":"ContainerDied","Data":"7fff2a80febe3280b1d6e57b5c687153274f75957492becf2d7bfc6cffdb5f65"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.108894 4799 generic.go:334] "Generic (PLEG): container finished" podID="5b01cc32-2ffb-4377-afff-7fbaa3d14de7" containerID="eff3e29f80d9de6e897b75e0b02adcfe12220ce6d41edc70ddb3183ed98e9b7c" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.108963 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqxqz" event={"ID":"5b01cc32-2ffb-4377-afff-7fbaa3d14de7","Type":"ContainerDied","Data":"eff3e29f80d9de6e897b75e0b02adcfe12220ce6d41edc70ddb3183ed98e9b7c"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.116014 4799 generic.go:334] "Generic (PLEG): container finished" podID="24508566-6f8e-48c4-a3e5-088544cd6b94" containerID="1436dd48ceffbab52561f3a9d13362c3d4099c3e9523d46fb3bd724938714bc6" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.116093 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8gkts" event={"ID":"24508566-6f8e-48c4-a3e5-088544cd6b94","Type":"ContainerDied","Data":"1436dd48ceffbab52561f3a9d13362c3d4099c3e9523d46fb3bd724938714bc6"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.118262 4799 generic.go:334] "Generic (PLEG): container finished" podID="666620a1-4d36-48ba-a226-e4ba6b9d82a0" containerID="96c4042bf5406878f8dd3772d3a3371136adb68791394d18af4037fa332114d5" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.118333 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zbptw" event={"ID":"666620a1-4d36-48ba-a226-e4ba6b9d82a0","Type":"ContainerDied","Data":"96c4042bf5406878f8dd3772d3a3371136adb68791394d18af4037fa332114d5"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.119970 4799 generic.go:334] "Generic (PLEG): 
container finished" podID="f7687713-4d41-4085-aef9-4e0478651f4a" containerID="81be9102825c99fb252a254b1bc712cdf4bdbbd0f63c3b67b69ea9652402fde0" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.120024 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" event={"ID":"f7687713-4d41-4085-aef9-4e0478651f4a","Type":"ContainerDied","Data":"81be9102825c99fb252a254b1bc712cdf4bdbbd0f63c3b67b69ea9652402fde0"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.120044 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" event={"ID":"f7687713-4d41-4085-aef9-4e0478651f4a","Type":"ContainerStarted","Data":"173a1b751cc4eb1695ca6bd2a04167926c216b34df295a8fe106d78d88229ed8"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.126043 4799 generic.go:334] "Generic (PLEG): container finished" podID="b9ea3026-b416-47dd-b55a-994533c7f302" containerID="ef38302687eb542879cd9b75c45fd05dd88772a509cb4ad39c25facc67c4fd68" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.126153 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3eda-account-create-update-v5sgh" event={"ID":"b9ea3026-b416-47dd-b55a-994533c7f302","Type":"ContainerDied","Data":"ef38302687eb542879cd9b75c45fd05dd88772a509cb4ad39c25facc67c4fd68"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.133426 4799 generic.go:334] "Generic (PLEG): container finished" podID="dfe230e4-078d-4aeb-858f-296dd5505f4a" containerID="86687dcaddc2d7937cd80f84b8ee9085606202b5a97d31a2d04ae3bf757d7599" exitCode=0 Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.133481 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c72-account-create-update-9x5nw" event={"ID":"dfe230e4-078d-4aeb-858f-296dd5505f4a","Type":"ContainerDied","Data":"86687dcaddc2d7937cd80f84b8ee9085606202b5a97d31a2d04ae3bf757d7599"} Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 
08:07:48.423469 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.551191 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-nb\") pod \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.551249 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-config\") pod \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.551390 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf2m2\" (UniqueName: \"kubernetes.io/projected/0947eb13-2cb0-48b9-944d-4ae4d3db110c-kube-api-access-qf2m2\") pod \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.551473 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-sb\") pod \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.551526 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-svc\") pod \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.551606 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-swift-storage-0\") pod \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\" (UID: \"0947eb13-2cb0-48b9-944d-4ae4d3db110c\") " Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.581524 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0947eb13-2cb0-48b9-944d-4ae4d3db110c-kube-api-access-qf2m2" (OuterVolumeSpecName: "kube-api-access-qf2m2") pod "0947eb13-2cb0-48b9-944d-4ae4d3db110c" (UID: "0947eb13-2cb0-48b9-944d-4ae4d3db110c"). InnerVolumeSpecName "kube-api-access-qf2m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.653992 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf2m2\" (UniqueName: \"kubernetes.io/projected/0947eb13-2cb0-48b9-944d-4ae4d3db110c-kube-api-access-qf2m2\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.697352 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0947eb13-2cb0-48b9-944d-4ae4d3db110c" (UID: "0947eb13-2cb0-48b9-944d-4ae4d3db110c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.701854 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0947eb13-2cb0-48b9-944d-4ae4d3db110c" (UID: "0947eb13-2cb0-48b9-944d-4ae4d3db110c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.716984 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-config" (OuterVolumeSpecName: "config") pod "0947eb13-2cb0-48b9-944d-4ae4d3db110c" (UID: "0947eb13-2cb0-48b9-944d-4ae4d3db110c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.723956 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0947eb13-2cb0-48b9-944d-4ae4d3db110c" (UID: "0947eb13-2cb0-48b9-944d-4ae4d3db110c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.735168 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0947eb13-2cb0-48b9-944d-4ae4d3db110c" (UID: "0947eb13-2cb0-48b9-944d-4ae4d3db110c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.755690 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.755725 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.755737 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.755747 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:48 crc kubenswrapper[4799]: I0127 08:07:48.755757 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0947eb13-2cb0-48b9-944d-4ae4d3db110c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.148873 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" event={"ID":"f7687713-4d41-4085-aef9-4e0478651f4a","Type":"ContainerStarted","Data":"a5ebfe33cee6f6d6b3a58ce99d9d667aadb7fd018e2aae4624e00476cff72f9b"} Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.148985 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4c4x7" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.197729 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" podStartSLOduration=3.19770285 podStartE2EDuration="3.19770285s" podCreationTimestamp="2026-01-27 08:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:07:49.181679667 +0000 UTC m=+1335.492783742" watchObservedRunningTime="2026-01-27 08:07:49.19770285 +0000 UTC m=+1335.508806925" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.226199 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4c4x7"] Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.233601 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4c4x7"] Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.641264 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-9x5nw" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.652401 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zbptw" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.682793 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4zf\" (UniqueName: \"kubernetes.io/projected/666620a1-4d36-48ba-a226-e4ba6b9d82a0-kube-api-access-7c4zf\") pod \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.682857 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2scrj\" (UniqueName: \"kubernetes.io/projected/dfe230e4-078d-4aeb-858f-296dd5505f4a-kube-api-access-2scrj\") pod \"dfe230e4-078d-4aeb-858f-296dd5505f4a\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.682883 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfe230e4-078d-4aeb-858f-296dd5505f4a-operator-scripts\") pod \"dfe230e4-078d-4aeb-858f-296dd5505f4a\" (UID: \"dfe230e4-078d-4aeb-858f-296dd5505f4a\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.683002 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/666620a1-4d36-48ba-a226-e4ba6b9d82a0-operator-scripts\") pod \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\" (UID: \"666620a1-4d36-48ba-a226-e4ba6b9d82a0\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.683924 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/666620a1-4d36-48ba-a226-e4ba6b9d82a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "666620a1-4d36-48ba-a226-e4ba6b9d82a0" (UID: "666620a1-4d36-48ba-a226-e4ba6b9d82a0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.685450 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe230e4-078d-4aeb-858f-296dd5505f4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfe230e4-078d-4aeb-858f-296dd5505f4a" (UID: "dfe230e4-078d-4aeb-858f-296dd5505f4a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.688737 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666620a1-4d36-48ba-a226-e4ba6b9d82a0-kube-api-access-7c4zf" (OuterVolumeSpecName: "kube-api-access-7c4zf") pod "666620a1-4d36-48ba-a226-e4ba6b9d82a0" (UID: "666620a1-4d36-48ba-a226-e4ba6b9d82a0"). InnerVolumeSpecName "kube-api-access-7c4zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.689759 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe230e4-078d-4aeb-858f-296dd5505f4a-kube-api-access-2scrj" (OuterVolumeSpecName: "kube-api-access-2scrj") pod "dfe230e4-078d-4aeb-858f-296dd5505f4a" (UID: "dfe230e4-078d-4aeb-858f-296dd5505f4a"). InnerVolumeSpecName "kube-api-access-2scrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.768157 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3eda-account-create-update-v5sgh" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.776520 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-8gkts" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.784276 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkx9s\" (UniqueName: \"kubernetes.io/projected/24508566-6f8e-48c4-a3e5-088544cd6b94-kube-api-access-jkx9s\") pod \"24508566-6f8e-48c4-a3e5-088544cd6b94\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.784586 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgq9s\" (UniqueName: \"kubernetes.io/projected/b9ea3026-b416-47dd-b55a-994533c7f302-kube-api-access-pgq9s\") pod \"b9ea3026-b416-47dd-b55a-994533c7f302\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.784621 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24508566-6f8e-48c4-a3e5-088544cd6b94-operator-scripts\") pod \"24508566-6f8e-48c4-a3e5-088544cd6b94\" (UID: \"24508566-6f8e-48c4-a3e5-088544cd6b94\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.784651 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ea3026-b416-47dd-b55a-994533c7f302-operator-scripts\") pod \"b9ea3026-b416-47dd-b55a-994533c7f302\" (UID: \"b9ea3026-b416-47dd-b55a-994533c7f302\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.785073 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4zf\" (UniqueName: \"kubernetes.io/projected/666620a1-4d36-48ba-a226-e4ba6b9d82a0-kube-api-access-7c4zf\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.787972 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/24508566-6f8e-48c4-a3e5-088544cd6b94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24508566-6f8e-48c4-a3e5-088544cd6b94" (UID: "24508566-6f8e-48c4-a3e5-088544cd6b94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.788701 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ea3026-b416-47dd-b55a-994533c7f302-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9ea3026-b416-47dd-b55a-994533c7f302" (UID: "b9ea3026-b416-47dd-b55a-994533c7f302"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.789367 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2scrj\" (UniqueName: \"kubernetes.io/projected/dfe230e4-078d-4aeb-858f-296dd5505f4a-kube-api-access-2scrj\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.789422 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfe230e4-078d-4aeb-858f-296dd5505f4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.789434 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/666620a1-4d36-48ba-a226-e4ba6b9d82a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.790678 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-abfe-account-create-update-l8nsq" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.800143 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jqxqz" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.802060 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24508566-6f8e-48c4-a3e5-088544cd6b94-kube-api-access-jkx9s" (OuterVolumeSpecName: "kube-api-access-jkx9s") pod "24508566-6f8e-48c4-a3e5-088544cd6b94" (UID: "24508566-6f8e-48c4-a3e5-088544cd6b94"). InnerVolumeSpecName "kube-api-access-jkx9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.805290 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ea3026-b416-47dd-b55a-994533c7f302-kube-api-access-pgq9s" (OuterVolumeSpecName: "kube-api-access-pgq9s") pod "b9ea3026-b416-47dd-b55a-994533c7f302" (UID: "b9ea3026-b416-47dd-b55a-994533c7f302"). InnerVolumeSpecName "kube-api-access-pgq9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890158 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfv57\" (UniqueName: \"kubernetes.io/projected/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-kube-api-access-dfv57\") pod \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890235 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wqqg\" (UniqueName: \"kubernetes.io/projected/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-kube-api-access-6wqqg\") pod \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") " Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890267 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-operator-scripts\") pod 
\"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\" (UID: \"5b01cc32-2ffb-4377-afff-7fbaa3d14de7\") "
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890429 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-operator-scripts\") pod \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\" (UID: \"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5\") "
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890838 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgq9s\" (UniqueName: \"kubernetes.io/projected/b9ea3026-b416-47dd-b55a-994533c7f302-kube-api-access-pgq9s\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890856 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24508566-6f8e-48c4-a3e5-088544cd6b94-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890868 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9ea3026-b416-47dd-b55a-994533c7f302-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.890878 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkx9s\" (UniqueName: \"kubernetes.io/projected/24508566-6f8e-48c4-a3e5-088544cd6b94-kube-api-access-jkx9s\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.891069 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b01cc32-2ffb-4377-afff-7fbaa3d14de7" (UID: "5b01cc32-2ffb-4377-afff-7fbaa3d14de7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.891398 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3acfc06-6e63-4a08-a201-50a9d6fe8ed5" (UID: "a3acfc06-6e63-4a08-a201-50a9d6fe8ed5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.894662 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-kube-api-access-6wqqg" (OuterVolumeSpecName: "kube-api-access-6wqqg") pod "a3acfc06-6e63-4a08-a201-50a9d6fe8ed5" (UID: "a3acfc06-6e63-4a08-a201-50a9d6fe8ed5"). InnerVolumeSpecName "kube-api-access-6wqqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.894922 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-kube-api-access-dfv57" (OuterVolumeSpecName: "kube-api-access-dfv57") pod "5b01cc32-2ffb-4377-afff-7fbaa3d14de7" (UID: "5b01cc32-2ffb-4377-afff-7fbaa3d14de7"). InnerVolumeSpecName "kube-api-access-dfv57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.991984 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.992016 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfv57\" (UniqueName: \"kubernetes.io/projected/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-kube-api-access-dfv57\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.992028 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wqqg\" (UniqueName: \"kubernetes.io/projected/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5-kube-api-access-6wqqg\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:49 crc kubenswrapper[4799]: I0127 08:07:49.992036 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b01cc32-2ffb-4377-afff-7fbaa3d14de7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.159634 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c72-account-create-update-9x5nw" event={"ID":"dfe230e4-078d-4aeb-858f-296dd5505f4a","Type":"ContainerDied","Data":"84d8578317b5afb8c907124aea142fa8ec7109eae0c6cb66835933447a02b91f"}
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.159681 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84d8578317b5afb8c907124aea142fa8ec7109eae0c6cb66835933447a02b91f"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.159740 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-9x5nw"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.162688 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-abfe-account-create-update-l8nsq" event={"ID":"a3acfc06-6e63-4a08-a201-50a9d6fe8ed5","Type":"ContainerDied","Data":"54fcd014e9dd605fcfcc006d8ae148d2434cfd78c9c282f0b63e524ad5caeca4"}
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.162723 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54fcd014e9dd605fcfcc006d8ae148d2434cfd78c9c282f0b63e524ad5caeca4"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.162699 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-abfe-account-create-update-l8nsq"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.164150 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqxqz"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.164184 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqxqz" event={"ID":"5b01cc32-2ffb-4377-afff-7fbaa3d14de7","Type":"ContainerDied","Data":"c4bd37ea18ddbed7dbcafd2e7f0d5c916e181cc4aa62aba1489119a94cdcbc9e"}
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.164214 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4bd37ea18ddbed7dbcafd2e7f0d5c916e181cc4aa62aba1489119a94cdcbc9e"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.170220 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8gkts" event={"ID":"24508566-6f8e-48c4-a3e5-088544cd6b94","Type":"ContainerDied","Data":"aa68947b61b90b18f826ecd07b960ffc3f5c5a7d354af1cb61632df66b134a69"}
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.170254 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa68947b61b90b18f826ecd07b960ffc3f5c5a7d354af1cb61632df66b134a69"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.170312 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8gkts"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.172062 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zbptw" event={"ID":"666620a1-4d36-48ba-a226-e4ba6b9d82a0","Type":"ContainerDied","Data":"9536b28742f86277366382b8f6b2d617af1fe28945084736979c69ac4fba7c26"}
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.172112 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9536b28742f86277366382b8f6b2d617af1fe28945084736979c69ac4fba7c26"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.172084 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zbptw"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.174114 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3eda-account-create-update-v5sgh" event={"ID":"b9ea3026-b416-47dd-b55a-994533c7f302","Type":"ContainerDied","Data":"2a20fd331e58b4c0a2e162e3819916b67cadf8d311fab99c4544e3e6c2eefcd0"}
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.174137 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a20fd331e58b4c0a2e162e3819916b67cadf8d311fab99c4544e3e6c2eefcd0"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.174118 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3eda-account-create-update-v5sgh"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.174272 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c"
Jan 27 08:07:50 crc kubenswrapper[4799]: I0127 08:07:50.465192 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" path="/var/lib/kubelet/pods/0947eb13-2cb0-48b9-944d-4ae4d3db110c/volumes"
Jan 27 08:07:54 crc kubenswrapper[4799]: I0127 08:07:54.210222 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fq478" event={"ID":"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37","Type":"ContainerStarted","Data":"4d31ba741b0e686370ed07cbf292129da580b7f81e22219052a6901111ce0158"}
Jan 27 08:07:54 crc kubenswrapper[4799]: I0127 08:07:54.236736 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fq478" podStartSLOduration=2.4641782 podStartE2EDuration="9.236708126s" podCreationTimestamp="2026-01-27 08:07:45 +0000 UTC" firstStartedPulling="2026-01-27 08:07:46.648432316 +0000 UTC m=+1332.959536381" lastFinishedPulling="2026-01-27 08:07:53.420962242 +0000 UTC m=+1339.732066307" observedRunningTime="2026-01-27 08:07:54.232686345 +0000 UTC m=+1340.543790440" watchObservedRunningTime="2026-01-27 08:07:54.236708126 +0000 UTC m=+1340.547812221"
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.104614 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c"
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.168170 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6bk7k"]
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.168622 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-6bk7k" podUID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerName="dnsmasq-dns" containerID="cri-o://54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699" gracePeriod=10
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.238472 4799 generic.go:334] "Generic (PLEG): container finished" podID="6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" containerID="4d31ba741b0e686370ed07cbf292129da580b7f81e22219052a6901111ce0158" exitCode=0
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.238519 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fq478" event={"ID":"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37","Type":"ContainerDied","Data":"4d31ba741b0e686370ed07cbf292129da580b7f81e22219052a6901111ce0158"}
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.651040 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6bk7k"
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.728691 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-sb\") pod \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") "
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.728756 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-dns-svc\") pod \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") "
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.728864 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-nb\") pod \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") "
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.728949 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-config\") pod \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") "
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.728989 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8xf2\" (UniqueName: \"kubernetes.io/projected/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-kube-api-access-q8xf2\") pod \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\" (UID: \"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d\") "
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.748543 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-kube-api-access-q8xf2" (OuterVolumeSpecName: "kube-api-access-q8xf2") pod "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" (UID: "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d"). InnerVolumeSpecName "kube-api-access-q8xf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.785384 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" (UID: "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.793782 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" (UID: "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.800270 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" (UID: "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.806807 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-config" (OuterVolumeSpecName: "config") pod "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" (UID: "5ff0c7f6-f6de-4632-8214-7cc2758b7e4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.830785 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.830822 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.830834 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.830842 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-config\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:57 crc kubenswrapper[4799]: I0127 08:07:57.830853 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8xf2\" (UniqueName: \"kubernetes.io/projected/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d-kube-api-access-q8xf2\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.268391 4799 generic.go:334] "Generic (PLEG): container finished" podID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerID="54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699" exitCode=0
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.268801 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6bk7k"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.269191 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6bk7k" event={"ID":"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d","Type":"ContainerDied","Data":"54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699"}
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.269344 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6bk7k" event={"ID":"5ff0c7f6-f6de-4632-8214-7cc2758b7e4d","Type":"ContainerDied","Data":"2d3f6dd28e3cb5b0b6f00d4e765235888acb96df8df08a975a39e6cb8f799365"}
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.269370 4799 scope.go:117] "RemoveContainer" containerID="54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.298248 4799 scope.go:117] "RemoveContainer" containerID="7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.327598 4799 scope.go:117] "RemoveContainer" containerID="54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699"
Jan 27 08:07:58 crc kubenswrapper[4799]: E0127 08:07:58.328881 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699\": container with ID starting with 54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699 not found: ID does not exist" containerID="54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.328913 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699"} err="failed to get container status \"54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699\": rpc error: code = NotFound desc = could not find container \"54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699\": container with ID starting with 54b43f79b487fff1b480e14a5d11c70def40ddc73870f53f16f188ce8e2bb699 not found: ID does not exist"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.328933 4799 scope.go:117] "RemoveContainer" containerID="7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0"
Jan 27 08:07:58 crc kubenswrapper[4799]: E0127 08:07:58.329235 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0\": container with ID starting with 7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0 not found: ID does not exist" containerID="7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.329256 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0"} err="failed to get container status \"7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0\": rpc error: code = NotFound desc = could not find container \"7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0\": container with ID starting with 7ef9c851512b5f799f2cfd7194a07e99d1307e7a015174fbbe88354400b56eb0 not found: ID does not exist"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.387188 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6bk7k"]
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.394792 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6bk7k"]
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.469468 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" path="/var/lib/kubelet/pods/5ff0c7f6-f6de-4632-8214-7cc2758b7e4d/volumes"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.626651 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fq478"
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.644927 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-combined-ca-bundle\") pod \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") "
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.645169 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fvxk\" (UniqueName: \"kubernetes.io/projected/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-kube-api-access-8fvxk\") pod \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") "
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.645207 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-config-data\") pod \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\" (UID: \"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37\") "
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.651549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-kube-api-access-8fvxk" (OuterVolumeSpecName: "kube-api-access-8fvxk") pod "6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" (UID: "6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37"). InnerVolumeSpecName "kube-api-access-8fvxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.689863 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" (UID: "6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.695084 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-config-data" (OuterVolumeSpecName: "config-data") pod "6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" (UID: "6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.747397 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.747430 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fvxk\" (UniqueName: \"kubernetes.io/projected/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-kube-api-access-8fvxk\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:58 crc kubenswrapper[4799]: I0127 08:07:58.747439 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.285566 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fq478" event={"ID":"6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37","Type":"ContainerDied","Data":"b95d945c4853b9644d7e1c7f7cf45ab8346bcc248ab74443853a874751eb1fce"}
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.285603 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95d945c4853b9644d7e1c7f7cf45ab8346bcc248ab74443853a874751eb1fce"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.285655 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fq478"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.534226 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xl844"]
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.534803 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerName="init"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.534867 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerName="init"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.534918 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666620a1-4d36-48ba-a226-e4ba6b9d82a0" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.534965 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="666620a1-4d36-48ba-a226-e4ba6b9d82a0" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535016 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe230e4-078d-4aeb-858f-296dd5505f4a" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535091 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe230e4-078d-4aeb-858f-296dd5505f4a" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535155 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24508566-6f8e-48c4-a3e5-088544cd6b94" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535203 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="24508566-6f8e-48c4-a3e5-088544cd6b94" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535258 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" containerName="keystone-db-sync"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535307 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" containerName="keystone-db-sync"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535376 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3acfc06-6e63-4a08-a201-50a9d6fe8ed5" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535449 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3acfc06-6e63-4a08-a201-50a9d6fe8ed5" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535504 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b01cc32-2ffb-4377-afff-7fbaa3d14de7" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535550 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b01cc32-2ffb-4377-afff-7fbaa3d14de7" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535597 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ea3026-b416-47dd-b55a-994533c7f302" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535642 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ea3026-b416-47dd-b55a-994533c7f302" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535729 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerName="init"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535781 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerName="init"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535843 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerName="dnsmasq-dns"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535890 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerName="dnsmasq-dns"
Jan 27 08:07:59 crc kubenswrapper[4799]: E0127 08:07:59.535942 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerName="dnsmasq-dns"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.535989 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerName="dnsmasq-dns"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536184 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0947eb13-2cb0-48b9-944d-4ae4d3db110c" containerName="dnsmasq-dns"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536241 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe230e4-078d-4aeb-858f-296dd5505f4a" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536296 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b01cc32-2ffb-4377-afff-7fbaa3d14de7" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536364 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="666620a1-4d36-48ba-a226-e4ba6b9d82a0" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536426 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="24508566-6f8e-48c4-a3e5-088544cd6b94" containerName="mariadb-database-create"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536474 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" containerName="keystone-db-sync"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536523 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3acfc06-6e63-4a08-a201-50a9d6fe8ed5" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536718 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ea3026-b416-47dd-b55a-994533c7f302" containerName="mariadb-account-create-update"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.536781 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff0c7f6-f6de-4632-8214-7cc2758b7e4d" containerName="dnsmasq-dns"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.537599 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.544555 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xl844"]
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.569670 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-config\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.569900 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.570045 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.570290 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52kt9\" (UniqueName: \"kubernetes.io/projected/f822cb52-beec-45b1-9571-c80f41fc4d9c-kube-api-access-52kt9\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.570482 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.570598 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.586287 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lwsfb"]
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.587307 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lwsfb"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.590423 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.590611 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.590798 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.592756 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.592941 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shqm7"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.615409 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lwsfb"]
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672195 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-fernet-keys\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672316 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52kt9\" (UniqueName: \"kubernetes.io/projected/f822cb52-beec-45b1-9571-c80f41fc4d9c-kube-api-access-52kt9\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672349 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672373 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672388 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-credential-keys\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672406 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-combined-ca-bundle\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672428 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-scripts\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672453 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-config\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672482 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672504 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672519 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9rr\" (UniqueName: \"kubernetes.io/projected/c09e8874-d97f-4386-abda-e11ecd97a8b1-kube-api-access-ml9rr\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.672537 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-config-data\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb"
Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.673536 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.673635 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.674064 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.674309 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-config\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.674422 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.703349 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52kt9\" (UniqueName: \"kubernetes.io/projected/f822cb52-beec-45b1-9571-c80f41fc4d9c-kube-api-access-52kt9\") pod 
\"dnsmasq-dns-847c4cc679-xl844\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.771489 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d9mwm"] Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.772951 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.774927 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-credential-keys\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.774967 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-combined-ca-bundle\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.774992 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-scripts\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.775036 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9rr\" (UniqueName: \"kubernetes.io/projected/c09e8874-d97f-4386-abda-e11ecd97a8b1-kube-api-access-ml9rr\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 
crc kubenswrapper[4799]: I0127 08:07:59.775054 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-config-data\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.775096 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-fernet-keys\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.781341 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.781589 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.781818 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-55nfh" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.785133 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-combined-ca-bundle\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.801390 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.809916 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-config-data\") pod 
\"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.812918 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.814265 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9rr\" (UniqueName: \"kubernetes.io/projected/c09e8874-d97f-4386-abda-e11ecd97a8b1-kube-api-access-ml9rr\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.815380 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-credential-keys\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.818452 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.822754 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-fernet-keys\") pod \"keystone-bootstrap-lwsfb\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.822823 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d9mwm"] Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.826044 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-scripts\") pod \"keystone-bootstrap-lwsfb\" 
(UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.829713 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.858858 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.863881 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.876407 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-combined-ca-bundle\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.876714 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrpb\" (UniqueName: \"kubernetes.io/projected/1e5da918-e50a-4642-947e-2f70675c384a-kube-api-access-bqrpb\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.876814 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.876941 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-config-data\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877043 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-config-data\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877159 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-scripts\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877273 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-log-httpd\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877409 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-db-sync-config-data\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877511 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxpvl\" (UniqueName: \"kubernetes.io/projected/84f49a02-4934-43be-aa45-d24a40b20db2-kube-api-access-cxpvl\") pod 
\"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877747 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-run-httpd\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877855 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-scripts\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.877953 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.878061 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f49a02-4934-43be-aa45-d24a40b20db2-etc-machine-id\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.896272 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-n9x6s"] Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.897642 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.899473 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.901213 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.905022 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gxbph" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.910617 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.937940 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-n9x6s"] Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.955389 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-csjnn"] Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.956602 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-csjnn" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.961305 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wsvr2" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.961482 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.961630 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.986974 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-config\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987018 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-combined-ca-bundle\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987051 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-combined-ca-bundle\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987078 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrpb\" (UniqueName: \"kubernetes.io/projected/1e5da918-e50a-4642-947e-2f70675c384a-kube-api-access-bqrpb\") 
pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987106 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987140 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-combined-ca-bundle\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987209 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-config-data\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987234 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb663bcc-159b-4604-8582-75a4baff492f-logs\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987259 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-config-data\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 
08:07:59.987335 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-scripts\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987380 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-log-httpd\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987410 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-config-data\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987690 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbc9r\" (UniqueName: \"kubernetes.io/projected/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-kube-api-access-gbc9r\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987712 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-db-sync-config-data\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cxpvl\" (UniqueName: \"kubernetes.io/projected/84f49a02-4934-43be-aa45-d24a40b20db2-kube-api-access-cxpvl\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987904 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fnj\" (UniqueName: \"kubernetes.io/projected/fb663bcc-159b-4604-8582-75a4baff492f-kube-api-access-x8fnj\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.987926 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-run-httpd\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.988012 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-scripts\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.988037 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.988123 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-scripts\") pod \"placement-db-sync-csjnn\" (UID: 
\"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.988160 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f49a02-4934-43be-aa45-d24a40b20db2-etc-machine-id\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:07:59 crc kubenswrapper[4799]: I0127 08:07:59.989821 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f49a02-4934-43be-aa45-d24a40b20db2-etc-machine-id\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.014823 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cj8lr"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.024720 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-run-httpd\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.025057 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-log-httpd\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.033832 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.037199 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.037856 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9dhbt" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.065834 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-config-data\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.066726 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-combined-ca-bundle\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.070920 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-scripts\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.072592 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.073659 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrpb\" (UniqueName: 
\"kubernetes.io/projected/1e5da918-e50a-4642-947e-2f70675c384a-kube-api-access-bqrpb\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.075285 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxpvl\" (UniqueName: \"kubernetes.io/projected/84f49a02-4934-43be-aa45-d24a40b20db2-kube-api-access-cxpvl\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.075484 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-db-sync-config-data\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.077071 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xl844"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.077495 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-config-data\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.080968 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-scripts\") pod \"cinder-db-sync-d9mwm\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.081525 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.092981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-combined-ca-bundle\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093044 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb663bcc-159b-4604-8582-75a4baff492f-logs\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093080 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-db-sync-config-data\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093105 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-combined-ca-bundle\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093144 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-config-data\") pod 
\"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093176 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbc9r\" (UniqueName: \"kubernetes.io/projected/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-kube-api-access-gbc9r\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093243 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8fnj\" (UniqueName: \"kubernetes.io/projected/fb663bcc-159b-4604-8582-75a4baff492f-kube-api-access-x8fnj\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093280 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-scripts\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093307 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbvn9\" (UniqueName: \"kubernetes.io/projected/6228dcb6-2940-4494-b1fa-838d28618279-kube-api-access-sbvn9\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093368 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-config\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " 
pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.093392 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-combined-ca-bundle\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.094289 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb663bcc-159b-4604-8582-75a4baff492f-logs\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.096330 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-csjnn"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.098454 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-config-data\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.104807 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-scripts\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.110421 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-combined-ca-bundle\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " 
pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.118638 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-combined-ca-bundle\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.122532 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cj8lr"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.125908 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8fnj\" (UniqueName: \"kubernetes.io/projected/fb663bcc-159b-4604-8582-75a4baff492f-kube-api-access-x8fnj\") pod \"placement-db-sync-csjnn\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.126023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbc9r\" (UniqueName: \"kubernetes.io/projected/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-kube-api-access-gbc9r\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.128206 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-config\") pod \"neutron-db-sync-n9x6s\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.137466 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5zp7"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.139983 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.156207 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5zp7"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196182 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196252 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbvn9\" (UniqueName: \"kubernetes.io/projected/6228dcb6-2940-4494-b1fa-838d28618279-kube-api-access-sbvn9\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196413 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-config\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196452 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglxv\" (UniqueName: \"kubernetes.io/projected/d322f101-f8ff-4a8b-9acb-9d441cf2367a-kube-api-access-fglxv\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196482 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196518 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-db-sync-config-data\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196539 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-combined-ca-bundle\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196565 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.196585 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.199944 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-db-sync-config-data\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.201207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-combined-ca-bundle\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.211627 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbvn9\" (UniqueName: \"kubernetes.io/projected/6228dcb6-2940-4494-b1fa-838d28618279-kube-api-access-sbvn9\") pod \"barbican-db-sync-cj8lr\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.211979 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.249785 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.268187 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.297528 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.297620 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-config\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.297654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglxv\" (UniqueName: \"kubernetes.io/projected/d322f101-f8ff-4a8b-9acb-9d441cf2367a-kube-api-access-fglxv\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.297703 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.298972 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.299002 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.300209 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.300599 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.300750 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.300991 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.301630 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.299566 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-config\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.325814 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglxv\" (UniqueName: \"kubernetes.io/projected/d322f101-f8ff-4a8b-9acb-9d441cf2367a-kube-api-access-fglxv\") pod \"dnsmasq-dns-785d8bcb8c-d5zp7\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.361735 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.442251 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lwsfb"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.467168 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.514597 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xl844"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.701905 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.708727 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.711987 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.717276 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.717577 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j7cm7" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.717694 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.722070 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.750977 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d9mwm"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811588 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " 
pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811640 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-logs\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811668 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811717 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811738 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-config-data\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811758 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " 
pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811783 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-scripts\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.811802 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrns\" (UniqueName: \"kubernetes.io/projected/297f7d25-8cc3-4e3f-86dc-08798c8674da-kube-api-access-7wrns\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.813210 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.814924 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.818272 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.818560 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.836980 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.865959 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918383 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918466 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918574 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-logs\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918614 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918663 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918689 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918725 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918794 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918829 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918856 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-config-data\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918882 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24j9\" (UniqueName: \"kubernetes.io/projected/aabe90ea-0872-4492-b996-fa1b477439b0-kube-api-access-n24j9\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918909 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918945 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-scripts\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.918979 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7wrns\" (UniqueName: \"kubernetes.io/projected/297f7d25-8cc3-4e3f-86dc-08798c8674da-kube-api-access-7wrns\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.919031 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.919104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.919768 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-logs\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.920432 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.921278 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.923827 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.925363 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-config-data\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.925400 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.926080 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-scripts\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.939639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrns\" (UniqueName: 
\"kubernetes.io/projected/297f7d25-8cc3-4e3f-86dc-08798c8674da-kube-api-access-7wrns\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.962975 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-n9x6s"] Jan 27 08:08:00 crc kubenswrapper[4799]: I0127 08:08:00.964146 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020480 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24j9\" (UniqueName: \"kubernetes.io/projected/aabe90ea-0872-4492-b996-fa1b477439b0-kube-api-access-n24j9\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020524 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020550 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020595 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020637 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020663 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.020696 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.024128 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.024263 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.025682 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.028373 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.028823 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.031840 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.035558 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.041623 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.046006 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n24j9\" (UniqueName: \"kubernetes.io/projected/aabe90ea-0872-4492-b996-fa1b477439b0-kube-api-access-n24j9\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.076524 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.085995 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5zp7"] Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.095959 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-csjnn"] Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.105450 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cj8lr"] Jan 27 08:08:01 crc kubenswrapper[4799]: W0127 
08:08:01.106825 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6228dcb6_2940_4494_b1fa_838d28618279.slice/crio-e0ee3b6638a19dbbd6263622143ad0e219c94b7bc52a1f1ba2e17c19fef4bc9f WatchSource:0}: Error finding container e0ee3b6638a19dbbd6263622143ad0e219c94b7bc52a1f1ba2e17c19fef4bc9f: Status 404 returned error can't find the container with id e0ee3b6638a19dbbd6263622143ad0e219c94b7bc52a1f1ba2e17c19fef4bc9f Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.147739 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.334780 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9mwm" event={"ID":"84f49a02-4934-43be-aa45-d24a40b20db2","Type":"ContainerStarted","Data":"4ffe9cfae9fd5ea05f57990d10b85c5b38485c3245ac778cbe173fb4e8abbd53"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.336949 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n9x6s" event={"ID":"a7ee0ddb-6bdc-4388-8b45-f58e81417a13","Type":"ContainerStarted","Data":"d3d0c0fe16de7311f2618c3baedd20dd747f2ebffc8915d9deba8c2da79a5917"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.336996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n9x6s" event={"ID":"a7ee0ddb-6bdc-4388-8b45-f58e81417a13","Type":"ContainerStarted","Data":"7040823b4c81e4d3d76322301e66386c133273e19c98ea8f29b90baeb1de9428"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.359674 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj8lr" event={"ID":"6228dcb6-2940-4494-b1fa-838d28618279","Type":"ContainerStarted","Data":"e0ee3b6638a19dbbd6263622143ad0e219c94b7bc52a1f1ba2e17c19fef4bc9f"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.367114 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-n9x6s" podStartSLOduration=2.367095839 podStartE2EDuration="2.367095839s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:01.36059316 +0000 UTC m=+1347.671697235" watchObservedRunningTime="2026-01-27 08:08:01.367095839 +0000 UTC m=+1347.678199904" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.368398 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-csjnn" event={"ID":"fb663bcc-159b-4604-8582-75a4baff492f","Type":"ContainerStarted","Data":"a5f3e8f19d2a79ae818104dd38ee5b036908942360cbccfec4b0200bc54361dc"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.382098 4799 generic.go:334] "Generic (PLEG): container finished" podID="f822cb52-beec-45b1-9571-c80f41fc4d9c" containerID="55087b36507d8c16508fd14ab8f666faca93826be467086c011feedb9e48fb04" exitCode=0 Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.382270 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-xl844" event={"ID":"f822cb52-beec-45b1-9571-c80f41fc4d9c","Type":"ContainerDied","Data":"55087b36507d8c16508fd14ab8f666faca93826be467086c011feedb9e48fb04"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.382299 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-xl844" event={"ID":"f822cb52-beec-45b1-9571-c80f41fc4d9c","Type":"ContainerStarted","Data":"8187e0e86f00570981a9517f9aa3c4f660cc13936e25defc0e3f35478b2a1fca"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.392105 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lwsfb" event={"ID":"c09e8874-d97f-4386-abda-e11ecd97a8b1","Type":"ContainerStarted","Data":"6cc7749f94db9bd51e61ad464a45ac68c16adae2d011d3816b81dd71f9236ae2"} 
Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.392150 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lwsfb" event={"ID":"c09e8874-d97f-4386-abda-e11ecd97a8b1","Type":"ContainerStarted","Data":"832bd9b7f5b8d8801567e312a997ea5c7aa0526676bd7a2520e5d0ea97d6125f"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.410452 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" event={"ID":"d322f101-f8ff-4a8b-9acb-9d441cf2367a","Type":"ContainerStarted","Data":"04c4336bfba1eecd7923d10b4ea0d74a2e02fec2c4befdb91d1cade662bd29b1"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.420595 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerStarted","Data":"4839b3919cc64d6659ca5cb3cd570412337e76fac8662e5e943983a0a24df48d"} Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.446358 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lwsfb" podStartSLOduration=2.446337315 podStartE2EDuration="2.446337315s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:01.430165068 +0000 UTC m=+1347.741269133" watchObservedRunningTime="2026-01-27 08:08:01.446337315 +0000 UTC m=+1347.757441380" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.618711 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.811454 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.820694 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.958885 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-config\") pod \"f822cb52-beec-45b1-9571-c80f41fc4d9c\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.959302 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-sb\") pod \"f822cb52-beec-45b1-9571-c80f41fc4d9c\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.959463 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-svc\") pod \"f822cb52-beec-45b1-9571-c80f41fc4d9c\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.959524 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-nb\") pod \"f822cb52-beec-45b1-9571-c80f41fc4d9c\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.959555 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-swift-storage-0\") pod \"f822cb52-beec-45b1-9571-c80f41fc4d9c\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " Jan 27 
08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.959632 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52kt9\" (UniqueName: \"kubernetes.io/projected/f822cb52-beec-45b1-9571-c80f41fc4d9c-kube-api-access-52kt9\") pod \"f822cb52-beec-45b1-9571-c80f41fc4d9c\" (UID: \"f822cb52-beec-45b1-9571-c80f41fc4d9c\") " Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.959894 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:01 crc kubenswrapper[4799]: I0127 08:08:01.970638 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f822cb52-beec-45b1-9571-c80f41fc4d9c-kube-api-access-52kt9" (OuterVolumeSpecName: "kube-api-access-52kt9") pod "f822cb52-beec-45b1-9571-c80f41fc4d9c" (UID: "f822cb52-beec-45b1-9571-c80f41fc4d9c"). InnerVolumeSpecName "kube-api-access-52kt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.002563 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-config" (OuterVolumeSpecName: "config") pod "f822cb52-beec-45b1-9571-c80f41fc4d9c" (UID: "f822cb52-beec-45b1-9571-c80f41fc4d9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.019496 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.049850 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f822cb52-beec-45b1-9571-c80f41fc4d9c" (UID: "f822cb52-beec-45b1-9571-c80f41fc4d9c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.050285 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f822cb52-beec-45b1-9571-c80f41fc4d9c" (UID: "f822cb52-beec-45b1-9571-c80f41fc4d9c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.050731 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f822cb52-beec-45b1-9571-c80f41fc4d9c" (UID: "f822cb52-beec-45b1-9571-c80f41fc4d9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.051822 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.068026 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f822cb52-beec-45b1-9571-c80f41fc4d9c" (UID: "f822cb52-beec-45b1-9571-c80f41fc4d9c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.069063 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.069078 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.069087 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.069096 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.069105 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f822cb52-beec-45b1-9571-c80f41fc4d9c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.069114 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52kt9\" (UniqueName: \"kubernetes.io/projected/f822cb52-beec-45b1-9571-c80f41fc4d9c-kube-api-access-52kt9\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.467617 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aabe90ea-0872-4492-b996-fa1b477439b0","Type":"ContainerStarted","Data":"096b016e7464e4b05920d2cd35d8307afb2ef2827fac41f0606a3002fda6a926"} Jan 27 08:08:02 crc 
kubenswrapper[4799]: I0127 08:08:02.467835 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"297f7d25-8cc3-4e3f-86dc-08798c8674da","Type":"ContainerStarted","Data":"65460a98d76a59838947150caaa06ee7b8b633080b07f56699faff93b5663aed"} Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.470420 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-xl844" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.470439 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-xl844" event={"ID":"f822cb52-beec-45b1-9571-c80f41fc4d9c","Type":"ContainerDied","Data":"8187e0e86f00570981a9517f9aa3c4f660cc13936e25defc0e3f35478b2a1fca"} Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.470515 4799 scope.go:117] "RemoveContainer" containerID="55087b36507d8c16508fd14ab8f666faca93826be467086c011feedb9e48fb04" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.472942 4799 generic.go:334] "Generic (PLEG): container finished" podID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerID="6da024e4fa81fd6f3513b14c398eda7842cb22fa5f68b705018cab25a2dd0184" exitCode=0 Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.472973 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" event={"ID":"d322f101-f8ff-4a8b-9acb-9d441cf2367a","Type":"ContainerDied","Data":"6da024e4fa81fd6f3513b14c398eda7842cb22fa5f68b705018cab25a2dd0184"} Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.473003 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" event={"ID":"d322f101-f8ff-4a8b-9acb-9d441cf2367a","Type":"ContainerStarted","Data":"f539f04a1a8918582f8a744cb307a21be74e2701b6c90dd1c0445404688673f1"} Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.502131 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" podStartSLOduration=3.502111297 podStartE2EDuration="3.502111297s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:02.501989823 +0000 UTC m=+1348.813093888" watchObservedRunningTime="2026-01-27 08:08:02.502111297 +0000 UTC m=+1348.813215362" Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.594753 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xl844"] Jan 27 08:08:02 crc kubenswrapper[4799]: I0127 08:08:02.609399 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-xl844"] Jan 27 08:08:03 crc kubenswrapper[4799]: I0127 08:08:03.520346 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aabe90ea-0872-4492-b996-fa1b477439b0","Type":"ContainerStarted","Data":"ab6161cacdc5be8c335072de52618091f6c800d3f9fce0d3012c0d6a97633b6c"} Jan 27 08:08:03 crc kubenswrapper[4799]: I0127 08:08:03.522921 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"297f7d25-8cc3-4e3f-86dc-08798c8674da","Type":"ContainerStarted","Data":"fb749da762fc00ef89c9b8e2072bc8f0634dceb49b22a6ca3264c4834d696867"} Jan 27 08:08:03 crc kubenswrapper[4799]: I0127 08:08:03.527208 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.462599 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f822cb52-beec-45b1-9571-c80f41fc4d9c" path="/var/lib/kubelet/pods/f822cb52-beec-45b1-9571-c80f41fc4d9c/volumes" Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.549041 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"aabe90ea-0872-4492-b996-fa1b477439b0","Type":"ContainerStarted","Data":"7662bf5a01a631ca242870d9efe41aa18691919e16cb40907cd49f50c6c5ed04"} Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.549561 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-log" containerID="cri-o://ab6161cacdc5be8c335072de52618091f6c800d3f9fce0d3012c0d6a97633b6c" gracePeriod=30 Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.549849 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-httpd" containerID="cri-o://7662bf5a01a631ca242870d9efe41aa18691919e16cb40907cd49f50c6c5ed04" gracePeriod=30 Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.565589 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-log" containerID="cri-o://fb749da762fc00ef89c9b8e2072bc8f0634dceb49b22a6ca3264c4834d696867" gracePeriod=30 Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.565696 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"297f7d25-8cc3-4e3f-86dc-08798c8674da","Type":"ContainerStarted","Data":"8e9e46a3769d9b3edef87410dd49900a9ece4e302626ffc091611db4feea1fd5"} Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.565780 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-httpd" containerID="cri-o://8e9e46a3769d9b3edef87410dd49900a9ece4e302626ffc091611db4feea1fd5" gracePeriod=30 Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.647766 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.64774421 podStartE2EDuration="5.64774421s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:04.612713794 +0000 UTC m=+1350.923817859" watchObservedRunningTime="2026-01-27 08:08:04.64774421 +0000 UTC m=+1350.958848275" Jan 27 08:08:04 crc kubenswrapper[4799]: I0127 08:08:04.652251 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.652239955 podStartE2EDuration="5.652239955s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:04.633551939 +0000 UTC m=+1350.944656004" watchObservedRunningTime="2026-01-27 08:08:04.652239955 +0000 UTC m=+1350.963344020" Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.607130 4799 generic.go:334] "Generic (PLEG): container finished" podID="aabe90ea-0872-4492-b996-fa1b477439b0" containerID="7662bf5a01a631ca242870d9efe41aa18691919e16cb40907cd49f50c6c5ed04" exitCode=0 Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.607468 4799 generic.go:334] "Generic (PLEG): container finished" podID="aabe90ea-0872-4492-b996-fa1b477439b0" containerID="ab6161cacdc5be8c335072de52618091f6c800d3f9fce0d3012c0d6a97633b6c" exitCode=143 Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.607189 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aabe90ea-0872-4492-b996-fa1b477439b0","Type":"ContainerDied","Data":"7662bf5a01a631ca242870d9efe41aa18691919e16cb40907cd49f50c6c5ed04"} Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.607616 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aabe90ea-0872-4492-b996-fa1b477439b0","Type":"ContainerDied","Data":"ab6161cacdc5be8c335072de52618091f6c800d3f9fce0d3012c0d6a97633b6c"} Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.611215 4799 generic.go:334] "Generic (PLEG): container finished" podID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerID="8e9e46a3769d9b3edef87410dd49900a9ece4e302626ffc091611db4feea1fd5" exitCode=0 Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.611250 4799 generic.go:334] "Generic (PLEG): container finished" podID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerID="fb749da762fc00ef89c9b8e2072bc8f0634dceb49b22a6ca3264c4834d696867" exitCode=143 Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.611274 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"297f7d25-8cc3-4e3f-86dc-08798c8674da","Type":"ContainerDied","Data":"8e9e46a3769d9b3edef87410dd49900a9ece4e302626ffc091611db4feea1fd5"} Jan 27 08:08:05 crc kubenswrapper[4799]: I0127 08:08:05.611323 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"297f7d25-8cc3-4e3f-86dc-08798c8674da","Type":"ContainerDied","Data":"fb749da762fc00ef89c9b8e2072bc8f0634dceb49b22a6ca3264c4834d696867"} Jan 27 08:08:06 crc kubenswrapper[4799]: I0127 08:08:06.625386 4799 generic.go:334] "Generic (PLEG): container finished" podID="c09e8874-d97f-4386-abda-e11ecd97a8b1" containerID="6cc7749f94db9bd51e61ad464a45ac68c16adae2d011d3816b81dd71f9236ae2" exitCode=0 Jan 27 08:08:06 crc kubenswrapper[4799]: I0127 08:08:06.625472 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lwsfb" event={"ID":"c09e8874-d97f-4386-abda-e11ecd97a8b1","Type":"ContainerDied","Data":"6cc7749f94db9bd51e61ad464a45ac68c16adae2d011d3816b81dd71f9236ae2"} Jan 27 08:08:10 crc kubenswrapper[4799]: I0127 08:08:10.476429 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:10 crc kubenswrapper[4799]: I0127 08:08:10.555577 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mgb2c"] Jan 27 08:08:10 crc kubenswrapper[4799]: I0127 08:08:10.555856 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="dnsmasq-dns" containerID="cri-o://a5ebfe33cee6f6d6b3a58ce99d9d667aadb7fd018e2aae4624e00476cff72f9b" gracePeriod=10 Jan 27 08:08:11 crc kubenswrapper[4799]: I0127 08:08:11.693451 4799 generic.go:334] "Generic (PLEG): container finished" podID="f7687713-4d41-4085-aef9-4e0478651f4a" containerID="a5ebfe33cee6f6d6b3a58ce99d9d667aadb7fd018e2aae4624e00476cff72f9b" exitCode=0 Jan 27 08:08:11 crc kubenswrapper[4799]: I0127 08:08:11.693567 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" event={"ID":"f7687713-4d41-4085-aef9-4e0478651f4a","Type":"ContainerDied","Data":"a5ebfe33cee6f6d6b3a58ce99d9d667aadb7fd018e2aae4624e00476cff72f9b"} Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.103953 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.523660 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.665670 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-config-data\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.665726 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-logs\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.665748 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-httpd-run\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.665778 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.665869 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-scripts\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.665909 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-public-tls-certs\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.665962 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wrns\" (UniqueName: \"kubernetes.io/projected/297f7d25-8cc3-4e3f-86dc-08798c8674da-kube-api-access-7wrns\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.666002 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-combined-ca-bundle\") pod \"297f7d25-8cc3-4e3f-86dc-08798c8674da\" (UID: \"297f7d25-8cc3-4e3f-86dc-08798c8674da\") " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.667258 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-logs" (OuterVolumeSpecName: "logs") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.667625 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.672474 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.678048 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-scripts" (OuterVolumeSpecName: "scripts") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.693563 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297f7d25-8cc3-4e3f-86dc-08798c8674da-kube-api-access-7wrns" (OuterVolumeSpecName: "kube-api-access-7wrns") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "kube-api-access-7wrns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.718551 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"297f7d25-8cc3-4e3f-86dc-08798c8674da","Type":"ContainerDied","Data":"65460a98d76a59838947150caaa06ee7b8b633080b07f56699faff93b5663aed"} Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.718598 4799 scope.go:117] "RemoveContainer" containerID="8e9e46a3769d9b3edef87410dd49900a9ece4e302626ffc091611db4feea1fd5" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.718714 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.721167 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.741170 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-config-data" (OuterVolumeSpecName: "config-data") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.742115 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "297f7d25-8cc3-4e3f-86dc-08798c8674da" (UID: "297f7d25-8cc3-4e3f-86dc-08798c8674da"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768578 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768607 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768615 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/297f7d25-8cc3-4e3f-86dc-08798c8674da-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768647 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768657 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768666 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768675 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wrns\" (UniqueName: \"kubernetes.io/projected/297f7d25-8cc3-4e3f-86dc-08798c8674da-kube-api-access-7wrns\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.768684 4799 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f7d25-8cc3-4e3f-86dc-08798c8674da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.789328 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.869751 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:12 crc kubenswrapper[4799]: I0127 08:08:12.991343 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.079885 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.086396 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.123346 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: E0127 08:08:13.124395 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f822cb52-beec-45b1-9571-c80f41fc4d9c" containerName="init" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124412 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f822cb52-beec-45b1-9571-c80f41fc4d9c" containerName="init" Jan 27 08:08:13 crc kubenswrapper[4799]: E0127 08:08:13.124440 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-log" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124449 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-log" Jan 27 08:08:13 crc kubenswrapper[4799]: E0127 08:08:13.124473 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-httpd" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124483 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-httpd" Jan 27 08:08:13 crc kubenswrapper[4799]: E0127 08:08:13.124501 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-httpd" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124508 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-httpd" Jan 27 08:08:13 crc kubenswrapper[4799]: E0127 08:08:13.124553 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-log" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124560 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-log" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124927 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-log" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124955 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-log" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124978 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" containerName="glance-httpd" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.124996 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f822cb52-beec-45b1-9571-c80f41fc4d9c" containerName="init" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.125007 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" containerName="glance-httpd" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.126745 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.130443 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.130937 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.158358 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174568 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174732 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-internal-tls-certs\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174762 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n24j9\" (UniqueName: \"kubernetes.io/projected/aabe90ea-0872-4492-b996-fa1b477439b0-kube-api-access-n24j9\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: 
\"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174797 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-config-data\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174819 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-httpd-run\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174864 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-logs\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174885 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-scripts\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.174916 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-combined-ca-bundle\") pod \"aabe90ea-0872-4492-b996-fa1b477439b0\" (UID: \"aabe90ea-0872-4492-b996-fa1b477439b0\") " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.178102 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod 
"aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.178437 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-logs" (OuterVolumeSpecName: "logs") pod "aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.178615 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.186628 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabe90ea-0872-4492-b996-fa1b477439b0-kube-api-access-n24j9" (OuterVolumeSpecName: "kube-api-access-n24j9") pod "aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "kube-api-access-n24j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.190162 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-scripts" (OuterVolumeSpecName: "scripts") pod "aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.208766 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.225475 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.229751 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-config-data" (OuterVolumeSpecName: "config-data") pod "aabe90ea-0872-4492-b996-fa1b477439b0" (UID: "aabe90ea-0872-4492-b996-fa1b477439b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.276602 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-logs\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.276914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.277103 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.277282 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.277683 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-scripts\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " 
pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.277839 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-config-data\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.277944 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxx7b\" (UniqueName: \"kubernetes.io/projected/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-kube-api-access-jxx7b\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.278099 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.278290 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.279251 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n24j9\" (UniqueName: \"kubernetes.io/projected/aabe90ea-0872-4492-b996-fa1b477439b0-kube-api-access-n24j9\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.279541 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.279630 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.279704 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabe90ea-0872-4492-b996-fa1b477439b0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.279876 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.280050 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabe90ea-0872-4492-b996-fa1b477439b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.280381 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.296127 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382388 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " 
pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382480 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-logs\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382538 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382576 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382612 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382632 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-scripts\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382655 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-config-data\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382679 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxx7b\" (UniqueName: \"kubernetes.io/projected/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-kube-api-access-jxx7b\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.382740 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.384643 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-logs\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.385548 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.385658 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: 
\"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.386796 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.386933 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-config-data\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.388535 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.388999 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-scripts\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.407045 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxx7b\" (UniqueName: \"kubernetes.io/projected/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-kube-api-access-jxx7b\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " 
pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.410066 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.456925 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.727749 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aabe90ea-0872-4492-b996-fa1b477439b0","Type":"ContainerDied","Data":"096b016e7464e4b05920d2cd35d8307afb2ef2827fac41f0606a3002fda6a926"} Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.727857 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.768441 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.775540 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.792392 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.794526 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.797176 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.805894 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.809488 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893481 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893561 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzczc\" (UniqueName: \"kubernetes.io/projected/11218246-f6a8-477a-9b9a-7abf0338df9e-kube-api-access-nzczc\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893608 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893635 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893674 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893695 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.893716 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996350 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996424 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996454 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996550 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996594 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996598 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996807 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzczc\" (UniqueName: \"kubernetes.io/projected/11218246-f6a8-477a-9b9a-7abf0338df9e-kube-api-access-nzczc\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.996981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:13 crc kubenswrapper[4799]: I0127 08:08:13.997041 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.012770 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.012946 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.022405 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.026998 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.050810 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.051444 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.070946 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzczc\" (UniqueName: \"kubernetes.io/projected/11218246-f6a8-477a-9b9a-7abf0338df9e-kube-api-access-nzczc\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.084557 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.115027 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.462012 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297f7d25-8cc3-4e3f-86dc-08798c8674da" path="/var/lib/kubelet/pods/297f7d25-8cc3-4e3f-86dc-08798c8674da/volumes" Jan 27 08:08:14 crc kubenswrapper[4799]: I0127 08:08:14.463839 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabe90ea-0872-4492-b996-fa1b477439b0" path="/var/lib/kubelet/pods/aabe90ea-0872-4492-b996-fa1b477439b0/volumes" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.401288 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.522522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-credential-keys\") pod \"c09e8874-d97f-4386-abda-e11ecd97a8b1\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.522877 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-scripts\") pod \"c09e8874-d97f-4386-abda-e11ecd97a8b1\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.523002 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-fernet-keys\") pod \"c09e8874-d97f-4386-abda-e11ecd97a8b1\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.523035 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml9rr\" (UniqueName: \"kubernetes.io/projected/c09e8874-d97f-4386-abda-e11ecd97a8b1-kube-api-access-ml9rr\") pod \"c09e8874-d97f-4386-abda-e11ecd97a8b1\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.523091 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-config-data\") pod \"c09e8874-d97f-4386-abda-e11ecd97a8b1\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.523137 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-combined-ca-bundle\") pod \"c09e8874-d97f-4386-abda-e11ecd97a8b1\" (UID: \"c09e8874-d97f-4386-abda-e11ecd97a8b1\") " Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.528270 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c09e8874-d97f-4386-abda-e11ecd97a8b1" (UID: "c09e8874-d97f-4386-abda-e11ecd97a8b1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.528380 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-scripts" (OuterVolumeSpecName: "scripts") pod "c09e8874-d97f-4386-abda-e11ecd97a8b1" (UID: "c09e8874-d97f-4386-abda-e11ecd97a8b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.528462 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c09e8874-d97f-4386-abda-e11ecd97a8b1" (UID: "c09e8874-d97f-4386-abda-e11ecd97a8b1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.529664 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09e8874-d97f-4386-abda-e11ecd97a8b1-kube-api-access-ml9rr" (OuterVolumeSpecName: "kube-api-access-ml9rr") pod "c09e8874-d97f-4386-abda-e11ecd97a8b1" (UID: "c09e8874-d97f-4386-abda-e11ecd97a8b1"). InnerVolumeSpecName "kube-api-access-ml9rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.558053 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c09e8874-d97f-4386-abda-e11ecd97a8b1" (UID: "c09e8874-d97f-4386-abda-e11ecd97a8b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.560912 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-config-data" (OuterVolumeSpecName: "config-data") pod "c09e8874-d97f-4386-abda-e11ecd97a8b1" (UID: "c09e8874-d97f-4386-abda-e11ecd97a8b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.625417 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.625459 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml9rr\" (UniqueName: \"kubernetes.io/projected/c09e8874-d97f-4386-abda-e11ecd97a8b1-kube-api-access-ml9rr\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.625474 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.625485 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.625499 4799 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.625510 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09e8874-d97f-4386-abda-e11ecd97a8b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.748768 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lwsfb" Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.750810 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lwsfb" event={"ID":"c09e8874-d97f-4386-abda-e11ecd97a8b1","Type":"ContainerDied","Data":"832bd9b7f5b8d8801567e312a997ea5c7aa0526676bd7a2520e5d0ea97d6125f"} Jan 27 08:08:15 crc kubenswrapper[4799]: I0127 08:08:15.750866 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832bd9b7f5b8d8801567e312a997ea5c7aa0526676bd7a2520e5d0ea97d6125f" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.491251 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lwsfb"] Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.501784 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lwsfb"] Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.587624 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g9rt8"] Jan 27 08:08:16 crc kubenswrapper[4799]: E0127 08:08:16.588024 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09e8874-d97f-4386-abda-e11ecd97a8b1" containerName="keystone-bootstrap" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.588049 4799 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c09e8874-d97f-4386-abda-e11ecd97a8b1" containerName="keystone-bootstrap" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.588262 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09e8874-d97f-4386-abda-e11ecd97a8b1" containerName="keystone-bootstrap" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.588941 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.592051 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.595041 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.595148 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.595249 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shqm7" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.599512 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.600490 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g9rt8"] Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.751288 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-fernet-keys\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.751415 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-scripts\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.751473 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-credential-keys\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.751524 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-combined-ca-bundle\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.751634 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-config-data\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.751670 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdgm\" (UniqueName: \"kubernetes.io/projected/13e399ce-00b2-45ea-980b-338dda00c87d-kube-api-access-zqdgm\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.853415 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-scripts\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.853453 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-credential-keys\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.853492 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-combined-ca-bundle\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.853584 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-config-data\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.853611 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdgm\" (UniqueName: \"kubernetes.io/projected/13e399ce-00b2-45ea-980b-338dda00c87d-kube-api-access-zqdgm\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.853652 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-fernet-keys\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.859240 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-scripts\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.860583 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-combined-ca-bundle\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.861231 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-config-data\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.862040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-credential-keys\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.863114 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-fernet-keys\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " 
pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.871661 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdgm\" (UniqueName: \"kubernetes.io/projected/13e399ce-00b2-45ea-980b-338dda00c87d-kube-api-access-zqdgm\") pod \"keystone-bootstrap-g9rt8\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:16 crc kubenswrapper[4799]: I0127 08:08:16.912463 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:18 crc kubenswrapper[4799]: I0127 08:08:18.460410 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09e8874-d97f-4386-abda-e11ecd97a8b1" path="/var/lib/kubelet/pods/c09e8874-d97f-4386-abda-e11ecd97a8b1/volumes" Jan 27 08:08:22 crc kubenswrapper[4799]: I0127 08:08:22.104207 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.779664 4799 scope.go:117] "RemoveContainer" containerID="fb749da762fc00ef89c9b8e2072bc8f0634dceb49b22a6ca3264c4834d696867" Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.828229 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" event={"ID":"f7687713-4d41-4085-aef9-4e0478651f4a","Type":"ContainerDied","Data":"173a1b751cc4eb1695ca6bd2a04167926c216b34df295a8fe106d78d88229ed8"} Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.828271 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173a1b751cc4eb1695ca6bd2a04167926c216b34df295a8fe106d78d88229ed8" Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.895978 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.982561 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-sb\") pod \"f7687713-4d41-4085-aef9-4e0478651f4a\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.982623 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-svc\") pod \"f7687713-4d41-4085-aef9-4e0478651f4a\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.982675 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-config\") pod \"f7687713-4d41-4085-aef9-4e0478651f4a\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.982706 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-swift-storage-0\") pod \"f7687713-4d41-4085-aef9-4e0478651f4a\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.983979 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-nb\") pod \"f7687713-4d41-4085-aef9-4e0478651f4a\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.984356 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66bh4\" 
(UniqueName: \"kubernetes.io/projected/f7687713-4d41-4085-aef9-4e0478651f4a-kube-api-access-66bh4\") pod \"f7687713-4d41-4085-aef9-4e0478651f4a\" (UID: \"f7687713-4d41-4085-aef9-4e0478651f4a\") " Jan 27 08:08:23 crc kubenswrapper[4799]: I0127 08:08:23.997137 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7687713-4d41-4085-aef9-4e0478651f4a-kube-api-access-66bh4" (OuterVolumeSpecName: "kube-api-access-66bh4") pod "f7687713-4d41-4085-aef9-4e0478651f4a" (UID: "f7687713-4d41-4085-aef9-4e0478651f4a"). InnerVolumeSpecName "kube-api-access-66bh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.030999 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7687713-4d41-4085-aef9-4e0478651f4a" (UID: "f7687713-4d41-4085-aef9-4e0478651f4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.030999 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7687713-4d41-4085-aef9-4e0478651f4a" (UID: "f7687713-4d41-4085-aef9-4e0478651f4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.033954 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f7687713-4d41-4085-aef9-4e0478651f4a" (UID: "f7687713-4d41-4085-aef9-4e0478651f4a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.034558 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-config" (OuterVolumeSpecName: "config") pod "f7687713-4d41-4085-aef9-4e0478651f4a" (UID: "f7687713-4d41-4085-aef9-4e0478651f4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.041609 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7687713-4d41-4085-aef9-4e0478651f4a" (UID: "f7687713-4d41-4085-aef9-4e0478651f4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.086538 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.086568 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.086581 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.086590 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66bh4\" (UniqueName: \"kubernetes.io/projected/f7687713-4d41-4085-aef9-4e0478651f4a-kube-api-access-66bh4\") on node \"crc\" DevicePath \"\"" Jan 
27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.086599 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.086607 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7687713-4d41-4085-aef9-4e0478651f4a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:24 crc kubenswrapper[4799]: E0127 08:08:24.650959 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 27 08:08:24 crc kubenswrapper[4799]: E0127 08:08:24.651200 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbvn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cj8lr_openstack(6228dcb6-2940-4494-b1fa-838d28618279): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:08:24 crc kubenswrapper[4799]: E0127 08:08:24.653119 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cj8lr" 
podUID="6228dcb6-2940-4494-b1fa-838d28618279" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.836136 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" Jan 27 08:08:24 crc kubenswrapper[4799]: E0127 08:08:24.840332 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cj8lr" podUID="6228dcb6-2940-4494-b1fa-838d28618279" Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.897169 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mgb2c"] Jan 27 08:08:24 crc kubenswrapper[4799]: I0127 08:08:24.906209 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-mgb2c"] Jan 27 08:08:25 crc kubenswrapper[4799]: I0127 08:08:25.631341 4799 scope.go:117] "RemoveContainer" containerID="7662bf5a01a631ca242870d9efe41aa18691919e16cb40907cd49f50c6c5ed04" Jan 27 08:08:25 crc kubenswrapper[4799]: E0127 08:08:25.638199 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 08:08:25 crc kubenswrapper[4799]: E0127 08:08:25.638371 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxpvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d9mwm_openstack(84f49a02-4934-43be-aa45-d24a40b20db2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 08:08:25 crc kubenswrapper[4799]: E0127 08:08:25.639592 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d9mwm" podUID="84f49a02-4934-43be-aa45-d24a40b20db2" Jan 27 08:08:25 crc kubenswrapper[4799]: I0127 08:08:25.825491 4799 scope.go:117] "RemoveContainer" containerID="ab6161cacdc5be8c335072de52618091f6c800d3f9fce0d3012c0d6a97633b6c" Jan 27 08:08:25 crc kubenswrapper[4799]: E0127 08:08:25.851524 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d9mwm" podUID="84f49a02-4934-43be-aa45-d24a40b20db2" Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.168492 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:08:26 crc kubenswrapper[4799]: W0127 08:08:26.188176 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda72dcfc8_bda0_475d_ab6f_3c8a3ba8da55.slice/crio-3526e57ba100926dddd6c03cfa6436d8966cdcaccecd679f269948f9af4db9b9 WatchSource:0}: Error finding container 3526e57ba100926dddd6c03cfa6436d8966cdcaccecd679f269948f9af4db9b9: Status 404 returned error can't find the container with id 
3526e57ba100926dddd6c03cfa6436d8966cdcaccecd679f269948f9af4db9b9 Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.205243 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g9rt8"] Jan 27 08:08:26 crc kubenswrapper[4799]: W0127 08:08:26.210259 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13e399ce_00b2_45ea_980b_338dda00c87d.slice/crio-6adf9526e70d9cf54728c70f382fe1b971134ec91be5f3d2f7e98430e5a2f0ef WatchSource:0}: Error finding container 6adf9526e70d9cf54728c70f382fe1b971134ec91be5f3d2f7e98430e5a2f0ef: Status 404 returned error can't find the container with id 6adf9526e70d9cf54728c70f382fe1b971134ec91be5f3d2f7e98430e5a2f0ef Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.462023 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" path="/var/lib/kubelet/pods/f7687713-4d41-4085-aef9-4e0478651f4a/volumes" Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.857440 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g9rt8" event={"ID":"13e399ce-00b2-45ea-980b-338dda00c87d","Type":"ContainerStarted","Data":"4914f62a9fc682d3a594411b0b616df024ad8aaae1de80500b12fdffdccb724b"} Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.857774 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g9rt8" event={"ID":"13e399ce-00b2-45ea-980b-338dda00c87d","Type":"ContainerStarted","Data":"6adf9526e70d9cf54728c70f382fe1b971134ec91be5f3d2f7e98430e5a2f0ef"} Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.861531 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerStarted","Data":"1187d2907da98f5618a1286568dd296c6a90299e773f7c552dbd8b8ddc4a3f97"} Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.863150 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55","Type":"ContainerStarted","Data":"42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866"} Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.863183 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55","Type":"ContainerStarted","Data":"3526e57ba100926dddd6c03cfa6436d8966cdcaccecd679f269948f9af4db9b9"} Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.864496 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-csjnn" event={"ID":"fb663bcc-159b-4604-8582-75a4baff492f","Type":"ContainerStarted","Data":"03bbab10f777d692f5a10d2c248adb7caebe96e8ec90d34037a6c351ed59f741"} Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.880523 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g9rt8" podStartSLOduration=10.880502138 podStartE2EDuration="10.880502138s" podCreationTimestamp="2026-01-27 08:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:26.873463629 +0000 UTC m=+1373.184567694" watchObservedRunningTime="2026-01-27 08:08:26.880502138 +0000 UTC m=+1373.191606203" Jan 27 08:08:26 crc kubenswrapper[4799]: I0127 08:08:26.895046 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-csjnn" podStartSLOduration=3.399031826 podStartE2EDuration="27.895028438s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="2026-01-27 08:08:01.113811186 +0000 UTC m=+1347.424915251" lastFinishedPulling="2026-01-27 08:08:25.609807798 +0000 UTC m=+1371.920911863" observedRunningTime="2026-01-27 08:08:26.888875892 +0000 UTC m=+1373.199979947" 
watchObservedRunningTime="2026-01-27 08:08:26.895028438 +0000 UTC m=+1373.206132503" Jan 27 08:08:27 crc kubenswrapper[4799]: I0127 08:08:27.105765 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-mgb2c" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 27 08:08:27 crc kubenswrapper[4799]: I0127 08:08:27.248510 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:08:27 crc kubenswrapper[4799]: I0127 08:08:27.875227 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55","Type":"ContainerStarted","Data":"54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2"} Jan 27 08:08:27 crc kubenswrapper[4799]: I0127 08:08:27.878401 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11218246-f6a8-477a-9b9a-7abf0338df9e","Type":"ContainerStarted","Data":"f0f1a27d1c3775d4f9bb3826cb3def99570c42e6ed6f54bd1e8c144f71e8c3ee"} Jan 27 08:08:27 crc kubenswrapper[4799]: I0127 08:08:27.878428 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11218246-f6a8-477a-9b9a-7abf0338df9e","Type":"ContainerStarted","Data":"e8faad5e363e437f9979d58e910514d2a8c32a0a5ab6a39fd753111896f99dbe"} Jan 27 08:08:27 crc kubenswrapper[4799]: I0127 08:08:27.882064 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerStarted","Data":"045ed831d12ac9472a219e8b71d36cf6431ef93f0958a22272b066f27968894e"} Jan 27 08:08:27 crc kubenswrapper[4799]: I0127 08:08:27.912823 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=14.912768924 podStartE2EDuration="14.912768924s" podCreationTimestamp="2026-01-27 08:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:27.90480248 +0000 UTC m=+1374.215906545" watchObservedRunningTime="2026-01-27 08:08:27.912768924 +0000 UTC m=+1374.223873029" Jan 27 08:08:29 crc kubenswrapper[4799]: I0127 08:08:29.901687 4799 generic.go:334] "Generic (PLEG): container finished" podID="fb663bcc-159b-4604-8582-75a4baff492f" containerID="03bbab10f777d692f5a10d2c248adb7caebe96e8ec90d34037a6c351ed59f741" exitCode=0 Jan 27 08:08:29 crc kubenswrapper[4799]: I0127 08:08:29.901886 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-csjnn" event={"ID":"fb663bcc-159b-4604-8582-75a4baff492f","Type":"ContainerDied","Data":"03bbab10f777d692f5a10d2c248adb7caebe96e8ec90d34037a6c351ed59f741"} Jan 27 08:08:29 crc kubenswrapper[4799]: I0127 08:08:29.906988 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11218246-f6a8-477a-9b9a-7abf0338df9e","Type":"ContainerStarted","Data":"58103e21c893ba0c7f7e115f0cb776fe7e3182e09f7ad2ca9104804b9087f777"} Jan 27 08:08:29 crc kubenswrapper[4799]: I0127 08:08:29.913465 4799 generic.go:334] "Generic (PLEG): container finished" podID="13e399ce-00b2-45ea-980b-338dda00c87d" containerID="4914f62a9fc682d3a594411b0b616df024ad8aaae1de80500b12fdffdccb724b" exitCode=0 Jan 27 08:08:29 crc kubenswrapper[4799]: I0127 08:08:29.913515 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g9rt8" event={"ID":"13e399ce-00b2-45ea-980b-338dda00c87d","Type":"ContainerDied","Data":"4914f62a9fc682d3a594411b0b616df024ad8aaae1de80500b12fdffdccb724b"} Jan 27 08:08:29 crc kubenswrapper[4799]: I0127 08:08:29.953817 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.953793855 podStartE2EDuration="16.953793855s" podCreationTimestamp="2026-01-27 08:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:29.943417957 +0000 UTC m=+1376.254522052" watchObservedRunningTime="2026-01-27 08:08:29.953793855 +0000 UTC m=+1376.264897920" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.522756 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.535917 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.642868 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-credential-keys\") pod \"13e399ce-00b2-45ea-980b-338dda00c87d\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.642946 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8fnj\" (UniqueName: \"kubernetes.io/projected/fb663bcc-159b-4604-8582-75a4baff492f-kube-api-access-x8fnj\") pod \"fb663bcc-159b-4604-8582-75a4baff492f\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643037 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-config-data\") pod \"fb663bcc-159b-4604-8582-75a4baff492f\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643061 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-fernet-keys\") pod \"13e399ce-00b2-45ea-980b-338dda00c87d\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643097 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-scripts\") pod \"13e399ce-00b2-45ea-980b-338dda00c87d\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643131 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb663bcc-159b-4604-8582-75a4baff492f-logs\") pod \"fb663bcc-159b-4604-8582-75a4baff492f\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643158 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-combined-ca-bundle\") pod \"13e399ce-00b2-45ea-980b-338dda00c87d\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643184 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqdgm\" (UniqueName: \"kubernetes.io/projected/13e399ce-00b2-45ea-980b-338dda00c87d-kube-api-access-zqdgm\") pod \"13e399ce-00b2-45ea-980b-338dda00c87d\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643213 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-combined-ca-bundle\") pod \"fb663bcc-159b-4604-8582-75a4baff492f\" (UID: 
\"fb663bcc-159b-4604-8582-75a4baff492f\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643259 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-scripts\") pod \"fb663bcc-159b-4604-8582-75a4baff492f\" (UID: \"fb663bcc-159b-4604-8582-75a4baff492f\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.643282 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-config-data\") pod \"13e399ce-00b2-45ea-980b-338dda00c87d\" (UID: \"13e399ce-00b2-45ea-980b-338dda00c87d\") " Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.645278 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb663bcc-159b-4604-8582-75a4baff492f-logs" (OuterVolumeSpecName: "logs") pod "fb663bcc-159b-4604-8582-75a4baff492f" (UID: "fb663bcc-159b-4604-8582-75a4baff492f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.650408 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-scripts" (OuterVolumeSpecName: "scripts") pod "fb663bcc-159b-4604-8582-75a4baff492f" (UID: "fb663bcc-159b-4604-8582-75a4baff492f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.651095 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb663bcc-159b-4604-8582-75a4baff492f-kube-api-access-x8fnj" (OuterVolumeSpecName: "kube-api-access-x8fnj") pod "fb663bcc-159b-4604-8582-75a4baff492f" (UID: "fb663bcc-159b-4604-8582-75a4baff492f"). InnerVolumeSpecName "kube-api-access-x8fnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.651657 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e399ce-00b2-45ea-980b-338dda00c87d-kube-api-access-zqdgm" (OuterVolumeSpecName: "kube-api-access-zqdgm") pod "13e399ce-00b2-45ea-980b-338dda00c87d" (UID: "13e399ce-00b2-45ea-980b-338dda00c87d"). InnerVolumeSpecName "kube-api-access-zqdgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.657774 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-scripts" (OuterVolumeSpecName: "scripts") pod "13e399ce-00b2-45ea-980b-338dda00c87d" (UID: "13e399ce-00b2-45ea-980b-338dda00c87d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.657805 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "13e399ce-00b2-45ea-980b-338dda00c87d" (UID: "13e399ce-00b2-45ea-980b-338dda00c87d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.657896 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "13e399ce-00b2-45ea-980b-338dda00c87d" (UID: "13e399ce-00b2-45ea-980b-338dda00c87d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.675911 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13e399ce-00b2-45ea-980b-338dda00c87d" (UID: "13e399ce-00b2-45ea-980b-338dda00c87d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.678314 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-config-data" (OuterVolumeSpecName: "config-data") pod "fb663bcc-159b-4604-8582-75a4baff492f" (UID: "fb663bcc-159b-4604-8582-75a4baff492f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.680000 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-config-data" (OuterVolumeSpecName: "config-data") pod "13e399ce-00b2-45ea-980b-338dda00c87d" (UID: "13e399ce-00b2-45ea-980b-338dda00c87d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.687559 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb663bcc-159b-4604-8582-75a4baff492f" (UID: "fb663bcc-159b-4604-8582-75a4baff492f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745052 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745095 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745109 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745121 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb663bcc-159b-4604-8582-75a4baff492f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745133 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745147 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqdgm\" (UniqueName: \"kubernetes.io/projected/13e399ce-00b2-45ea-980b-338dda00c87d-kube-api-access-zqdgm\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745160 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745172 4799 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb663bcc-159b-4604-8582-75a4baff492f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745182 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745193 4799 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13e399ce-00b2-45ea-980b-338dda00c87d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.745206 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8fnj\" (UniqueName: \"kubernetes.io/projected/fb663bcc-159b-4604-8582-75a4baff492f-kube-api-access-x8fnj\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.943042 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g9rt8" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.943443 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g9rt8" event={"ID":"13e399ce-00b2-45ea-980b-338dda00c87d","Type":"ContainerDied","Data":"6adf9526e70d9cf54728c70f382fe1b971134ec91be5f3d2f7e98430e5a2f0ef"} Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.943487 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6adf9526e70d9cf54728c70f382fe1b971134ec91be5f3d2f7e98430e5a2f0ef" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.950520 4799 generic.go:334] "Generic (PLEG): container finished" podID="a7ee0ddb-6bdc-4388-8b45-f58e81417a13" containerID="d3d0c0fe16de7311f2618c3baedd20dd747f2ebffc8915d9deba8c2da79a5917" exitCode=0 Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.950607 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n9x6s" event={"ID":"a7ee0ddb-6bdc-4388-8b45-f58e81417a13","Type":"ContainerDied","Data":"d3d0c0fe16de7311f2618c3baedd20dd747f2ebffc8915d9deba8c2da79a5917"} Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.952755 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-csjnn" event={"ID":"fb663bcc-159b-4604-8582-75a4baff492f","Type":"ContainerDied","Data":"a5f3e8f19d2a79ae818104dd38ee5b036908942360cbccfec4b0200bc54361dc"} Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.952784 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f3e8f19d2a79ae818104dd38ee5b036908942360cbccfec4b0200bc54361dc" Jan 27 08:08:31 crc kubenswrapper[4799]: I0127 08:08:31.952829 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-csjnn" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.022292 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7c8985574d-z64hk"] Jan 27 08:08:32 crc kubenswrapper[4799]: E0127 08:08:32.022767 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e399ce-00b2-45ea-980b-338dda00c87d" containerName="keystone-bootstrap" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.022792 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e399ce-00b2-45ea-980b-338dda00c87d" containerName="keystone-bootstrap" Jan 27 08:08:32 crc kubenswrapper[4799]: E0127 08:08:32.022809 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="dnsmasq-dns" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.022818 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="dnsmasq-dns" Jan 27 08:08:32 crc kubenswrapper[4799]: E0127 08:08:32.022837 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb663bcc-159b-4604-8582-75a4baff492f" containerName="placement-db-sync" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.022845 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb663bcc-159b-4604-8582-75a4baff492f" containerName="placement-db-sync" Jan 27 08:08:32 crc kubenswrapper[4799]: E0127 08:08:32.022877 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="init" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.022885 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="init" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.023056 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb663bcc-159b-4604-8582-75a4baff492f" containerName="placement-db-sync" 
Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.023074 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7687713-4d41-4085-aef9-4e0478651f4a" containerName="dnsmasq-dns" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.023097 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e399ce-00b2-45ea-980b-338dda00c87d" containerName="keystone-bootstrap" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.024224 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.028982 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.029472 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.029732 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wsvr2" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.029853 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.030067 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.031856 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c8985574d-z64hk"] Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.110137 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d94bcc8dc-5hh96"] Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.111138 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.115830 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.116092 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.116391 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shqm7" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.116516 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.116647 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.116798 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.145557 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d94bcc8dc-5hh96"] Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.152624 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-internal-tls-certs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.152673 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-config-data\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " 
pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.152704 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-combined-ca-bundle\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.152724 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c28c66b4-aa13-41ed-8045-b6f131d48146-logs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.152757 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-public-tls-certs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.152842 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vms2q\" (UniqueName: \"kubernetes.io/projected/c28c66b4-aa13-41ed-8045-b6f131d48146-kube-api-access-vms2q\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.152867 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-scripts\") pod \"placement-7c8985574d-z64hk\" (UID: 
\"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254196 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vms2q\" (UniqueName: \"kubernetes.io/projected/c28c66b4-aa13-41ed-8045-b6f131d48146-kube-api-access-vms2q\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254240 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-fernet-keys\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254261 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-credential-keys\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254293 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-scripts\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254423 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-config-data\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " 
pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254460 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-scripts\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254509 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-internal-tls-certs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254528 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-public-tls-certs\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254552 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-internal-tls-certs\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254568 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-config-data\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " 
pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254585 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-combined-ca-bundle\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254612 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-combined-ca-bundle\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254632 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c28c66b4-aa13-41ed-8045-b6f131d48146-logs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254661 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6tb\" (UniqueName: \"kubernetes.io/projected/b32c7a11-1bfb-494f-a2d9-8800ba707e94-kube-api-access-7s6tb\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.254683 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-public-tls-certs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " 
pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.258536 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-internal-tls-certs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.258663 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-public-tls-certs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.260012 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c28c66b4-aa13-41ed-8045-b6f131d48146-logs\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.263080 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-scripts\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.263542 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-config-data\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.264854 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-combined-ca-bundle\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.280687 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vms2q\" (UniqueName: \"kubernetes.io/projected/c28c66b4-aa13-41ed-8045-b6f131d48146-kube-api-access-vms2q\") pod \"placement-7c8985574d-z64hk\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.352320 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.360150 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-scripts\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.361102 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-public-tls-certs\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.361137 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-internal-tls-certs\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" 
Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.361158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-combined-ca-bundle\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.361223 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6tb\" (UniqueName: \"kubernetes.io/projected/b32c7a11-1bfb-494f-a2d9-8800ba707e94-kube-api-access-7s6tb\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.361290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-fernet-keys\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.361367 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-credential-keys\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.361452 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-config-data\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.365158 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-internal-tls-certs\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.365584 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-public-tls-certs\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.366102 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-fernet-keys\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.366265 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-config-data\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.366576 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-credential-keys\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.366846 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-scripts\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.367530 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-combined-ca-bundle\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.381151 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6tb\" (UniqueName: \"kubernetes.io/projected/b32c7a11-1bfb-494f-a2d9-8800ba707e94-kube-api-access-7s6tb\") pod \"keystone-7d94bcc8dc-5hh96\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:32 crc kubenswrapper[4799]: I0127 08:08:32.438941 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.281950 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.391547 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-config\") pod \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.392004 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbc9r\" (UniqueName: \"kubernetes.io/projected/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-kube-api-access-gbc9r\") pod \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.392036 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-combined-ca-bundle\") pod \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\" (UID: \"a7ee0ddb-6bdc-4388-8b45-f58e81417a13\") " Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.399638 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-kube-api-access-gbc9r" (OuterVolumeSpecName: "kube-api-access-gbc9r") pod "a7ee0ddb-6bdc-4388-8b45-f58e81417a13" (UID: "a7ee0ddb-6bdc-4388-8b45-f58e81417a13"). InnerVolumeSpecName "kube-api-access-gbc9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.424458 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-config" (OuterVolumeSpecName: "config") pod "a7ee0ddb-6bdc-4388-8b45-f58e81417a13" (UID: "a7ee0ddb-6bdc-4388-8b45-f58e81417a13"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.424497 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7ee0ddb-6bdc-4388-8b45-f58e81417a13" (UID: "a7ee0ddb-6bdc-4388-8b45-f58e81417a13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.457854 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.457896 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.493948 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbc9r\" (UniqueName: \"kubernetes.io/projected/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-kube-api-access-gbc9r\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.493990 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.494004 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a7ee0ddb-6bdc-4388-8b45-f58e81417a13-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.494426 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.506923 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.533897 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d94bcc8dc-5hh96"] Jan 27 08:08:33 crc kubenswrapper[4799]: W0127 08:08:33.536825 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb32c7a11_1bfb_494f_a2d9_8800ba707e94.slice/crio-49ea0e43eac8e94ff1f377c43617cfa0891f723aa81cd12f6093408c92e1c4a6 WatchSource:0}: Error finding container 49ea0e43eac8e94ff1f377c43617cfa0891f723aa81cd12f6093408c92e1c4a6: Status 404 returned error can't find the container with id 49ea0e43eac8e94ff1f377c43617cfa0891f723aa81cd12f6093408c92e1c4a6 Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.626164 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c8985574d-z64hk"] Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.993813 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n9x6s" event={"ID":"a7ee0ddb-6bdc-4388-8b45-f58e81417a13","Type":"ContainerDied","Data":"7040823b4c81e4d3d76322301e66386c133273e19c98ea8f29b90baeb1de9428"} Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.993870 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7040823b4c81e4d3d76322301e66386c133273e19c98ea8f29b90baeb1de9428" Jan 27 08:08:33 crc kubenswrapper[4799]: I0127 08:08:33.993975 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-n9x6s" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.001768 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8985574d-z64hk" event={"ID":"c28c66b4-aa13-41ed-8045-b6f131d48146","Type":"ContainerStarted","Data":"801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b"} Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.001852 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8985574d-z64hk" event={"ID":"c28c66b4-aa13-41ed-8045-b6f131d48146","Type":"ContainerStarted","Data":"028a6f7811fa87472ac04365809f2fa32017c274a71bdd8731df32a4ae803c9a"} Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.005980 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerStarted","Data":"1c74fc2090f97519cbf9706c495b923132e1047c2bdf13fd312d9fc586b10fca"} Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.008250 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d94bcc8dc-5hh96" event={"ID":"b32c7a11-1bfb-494f-a2d9-8800ba707e94","Type":"ContainerStarted","Data":"8ac05fa5a627833e782a394e656005413a8d6b8562b382febb6252fc92879e3a"} Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.008286 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d94bcc8dc-5hh96" event={"ID":"b32c7a11-1bfb-494f-a2d9-8800ba707e94","Type":"ContainerStarted","Data":"49ea0e43eac8e94ff1f377c43617cfa0891f723aa81cd12f6093408c92e1c4a6"} Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.009018 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.009053 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 08:08:34 crc 
kubenswrapper[4799]: I0127 08:08:34.009067 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.042955 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7d94bcc8dc-5hh96" podStartSLOduration=2.042932588 podStartE2EDuration="2.042932588s" podCreationTimestamp="2026-01-27 08:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:34.034488811 +0000 UTC m=+1380.345592876" watchObservedRunningTime="2026-01-27 08:08:34.042932588 +0000 UTC m=+1380.354036653" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.116115 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.116168 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.177633 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-pvktj"] Jan 27 08:08:34 crc kubenswrapper[4799]: E0127 08:08:34.178164 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ee0ddb-6bdc-4388-8b45-f58e81417a13" containerName="neutron-db-sync" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.178187 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ee0ddb-6bdc-4388-8b45-f58e81417a13" containerName="neutron-db-sync" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.178473 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ee0ddb-6bdc-4388-8b45-f58e81417a13" containerName="neutron-db-sync" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.179594 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.186198 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-pvktj"] Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.203101 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.203217 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.210724 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vdp\" (UniqueName: \"kubernetes.io/projected/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-kube-api-access-24vdp\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.210769 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.210813 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-config\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.210829 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.210845 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.210874 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.313685 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-config\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.313728 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.313752 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.313789 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.313898 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24vdp\" (UniqueName: \"kubernetes.io/projected/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-kube-api-access-24vdp\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.313929 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.315051 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.315740 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-config\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.316604 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.317127 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.317214 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.341546 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24vdp\" (UniqueName: \"kubernetes.io/projected/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-kube-api-access-24vdp\") pod \"dnsmasq-dns-55f844cf75-pvktj\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.402378 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6685f45956-srl2k"] Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.403959 4799 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.408901 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gxbph" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.409105 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.409214 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.419364 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6685f45956-srl2k"] Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.436955 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.510872 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.520118 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-combined-ca-bundle\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.520952 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-ovndb-tls-certs\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.520996 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-config\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.521133 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-httpd-config\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.521162 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk9x7\" (UniqueName: \"kubernetes.io/projected/e295299c-8c47-48e4-a231-a1582bc0f3af-kube-api-access-hk9x7\") pod \"neutron-6685f45956-srl2k\" (UID: 
\"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.622132 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-ovndb-tls-certs\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.622180 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-config\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.622250 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-httpd-config\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.622266 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk9x7\" (UniqueName: \"kubernetes.io/projected/e295299c-8c47-48e4-a231-a1582bc0f3af-kube-api-access-hk9x7\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.622289 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-combined-ca-bundle\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 
crc kubenswrapper[4799]: I0127 08:08:34.631040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-config\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.633815 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-httpd-config\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.639346 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-ovndb-tls-certs\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.660197 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-combined-ca-bundle\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.693807 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk9x7\" (UniqueName: \"kubernetes.io/projected/e295299c-8c47-48e4-a231-a1582bc0f3af-kube-api-access-hk9x7\") pod \"neutron-6685f45956-srl2k\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:34 crc kubenswrapper[4799]: I0127 08:08:34.814692 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:35 crc kubenswrapper[4799]: I0127 08:08:35.024060 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8985574d-z64hk" event={"ID":"c28c66b4-aa13-41ed-8045-b6f131d48146","Type":"ContainerStarted","Data":"c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409"} Jan 27 08:08:35 crc kubenswrapper[4799]: I0127 08:08:35.024711 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:35 crc kubenswrapper[4799]: I0127 08:08:35.024730 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:35 crc kubenswrapper[4799]: I0127 08:08:35.060155 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7c8985574d-z64hk" podStartSLOduration=4.060132119 podStartE2EDuration="4.060132119s" podCreationTimestamp="2026-01-27 08:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:35.054101468 +0000 UTC m=+1381.365205553" watchObservedRunningTime="2026-01-27 08:08:35.060132119 +0000 UTC m=+1381.371236184" Jan 27 08:08:35 crc kubenswrapper[4799]: I0127 08:08:35.209758 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-pvktj"] Jan 27 08:08:35 crc kubenswrapper[4799]: W0127 08:08:35.210804 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fde0b34_cfe7_4ce2_9377_53e6ef0be697.slice/crio-4d3f4e17d0235ea45375360c62c49b8e10993eeb2e57d7632df928f502dd7386 WatchSource:0}: Error finding container 4d3f4e17d0235ea45375360c62c49b8e10993eeb2e57d7632df928f502dd7386: Status 404 returned error can't find the container with id 
4d3f4e17d0235ea45375360c62c49b8e10993eeb2e57d7632df928f502dd7386 Jan 27 08:08:35 crc kubenswrapper[4799]: I0127 08:08:35.536031 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6685f45956-srl2k"] Jan 27 08:08:35 crc kubenswrapper[4799]: W0127 08:08:35.576813 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode295299c_8c47_48e4_a231_a1582bc0f3af.slice/crio-eee0ca3f7d4b1697fae6ae9e3aba5439c1de61b242b5b1d0b59d301a6e05392e WatchSource:0}: Error finding container eee0ca3f7d4b1697fae6ae9e3aba5439c1de61b242b5b1d0b59d301a6e05392e: Status 404 returned error can't find the container with id eee0ca3f7d4b1697fae6ae9e3aba5439c1de61b242b5b1d0b59d301a6e05392e Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.034015 4799 generic.go:334] "Generic (PLEG): container finished" podID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerID="0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd" exitCode=0 Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.034073 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" event={"ID":"5fde0b34-cfe7-4ce2-9377-53e6ef0be697","Type":"ContainerDied","Data":"0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd"} Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.034096 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" event={"ID":"5fde0b34-cfe7-4ce2-9377-53e6ef0be697","Type":"ContainerStarted","Data":"4d3f4e17d0235ea45375360c62c49b8e10993eeb2e57d7632df928f502dd7386"} Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.044136 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6685f45956-srl2k" event={"ID":"e295299c-8c47-48e4-a231-a1582bc0f3af","Type":"ContainerStarted","Data":"0fa1d70bab19c7e9facfc74062a516796a67474b586c1c2b568545daf1caa22c"} Jan 27 08:08:36 crc 
kubenswrapper[4799]: I0127 08:08:36.044216 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6685f45956-srl2k" event={"ID":"e295299c-8c47-48e4-a231-a1582bc0f3af","Type":"ContainerStarted","Data":"eee0ca3f7d4b1697fae6ae9e3aba5439c1de61b242b5b1d0b59d301a6e05392e"} Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.044247 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.044285 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.462895 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.463279 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 08:08:36 crc kubenswrapper[4799]: I0127 08:08:36.473829 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.059347 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" event={"ID":"5fde0b34-cfe7-4ce2-9377-53e6ef0be697","Type":"ContainerStarted","Data":"89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885"} Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.059452 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.064058 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6685f45956-srl2k" event={"ID":"e295299c-8c47-48e4-a231-a1582bc0f3af","Type":"ContainerStarted","Data":"de406fdc38c24232226c538331e728ce7a4f2f2db044e6fa2f7d9269f79f75d9"} Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 
08:08:37.087504 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" podStartSLOduration=3.087476643 podStartE2EDuration="3.087476643s" podCreationTimestamp="2026-01-27 08:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:37.077024403 +0000 UTC m=+1383.388128478" watchObservedRunningTime="2026-01-27 08:08:37.087476643 +0000 UTC m=+1383.398580708" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.122653 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6685f45956-srl2k" podStartSLOduration=3.122630176 podStartE2EDuration="3.122630176s" podCreationTimestamp="2026-01-27 08:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:37.108259921 +0000 UTC m=+1383.419364006" watchObservedRunningTime="2026-01-27 08:08:37.122630176 +0000 UTC m=+1383.433734241" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.556105 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.570230 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.654714 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5ff7b8d449-xjt48"] Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.656330 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.660868 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.660968 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.682384 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ff7b8d449-xjt48"] Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.732702 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-ovndb-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.733139 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-config\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.733273 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-httpd-config\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.733409 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqs7\" (UniqueName: 
\"kubernetes.io/projected/2db9ba76-0532-4ed0-972e-fd5452048b97-kube-api-access-ttqs7\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.733626 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-combined-ca-bundle\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.733780 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-internal-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.733938 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-public-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.774490 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.840350 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-public-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 
08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.840426 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-ovndb-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.840455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-config\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.840481 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-httpd-config\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.840498 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqs7\" (UniqueName: \"kubernetes.io/projected/2db9ba76-0532-4ed0-972e-fd5452048b97-kube-api-access-ttqs7\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.840552 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-combined-ca-bundle\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.840587 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-internal-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.849337 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-internal-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.854791 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-public-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.863275 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-httpd-config\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.863500 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-config\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.871107 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqs7\" (UniqueName: 
\"kubernetes.io/projected/2db9ba76-0532-4ed0-972e-fd5452048b97-kube-api-access-ttqs7\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.874178 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-ovndb-tls-certs\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.876184 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-combined-ca-bundle\") pod \"neutron-5ff7b8d449-xjt48\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") " pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:37 crc kubenswrapper[4799]: I0127 08:08:37.987942 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:38 crc kubenswrapper[4799]: I0127 08:08:38.099213 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:08:38 crc kubenswrapper[4799]: I0127 08:08:38.617918 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ff7b8d449-xjt48"] Jan 27 08:08:38 crc kubenswrapper[4799]: W0127 08:08:38.631427 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2db9ba76_0532_4ed0_972e_fd5452048b97.slice/crio-47094d7eb8b960ef039bf2ba003d57c2df443425fa0077685754b2e42227a3fa WatchSource:0}: Error finding container 47094d7eb8b960ef039bf2ba003d57c2df443425fa0077685754b2e42227a3fa: Status 404 returned error can't find the container with id 47094d7eb8b960ef039bf2ba003d57c2df443425fa0077685754b2e42227a3fa Jan 27 08:08:39 crc kubenswrapper[4799]: I0127 08:08:39.116584 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff7b8d449-xjt48" event={"ID":"2db9ba76-0532-4ed0-972e-fd5452048b97","Type":"ContainerStarted","Data":"47094d7eb8b960ef039bf2ba003d57c2df443425fa0077685754b2e42227a3fa"} Jan 27 08:08:39 crc kubenswrapper[4799]: I0127 08:08:39.118020 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9mwm" event={"ID":"84f49a02-4934-43be-aa45-d24a40b20db2","Type":"ContainerStarted","Data":"0ac7a7f07ad4bec91f6ec2aa1d2412ca93a2b5761f8883e09f189453418e7eb4"} Jan 27 08:08:39 crc kubenswrapper[4799]: I0127 08:08:39.123782 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj8lr" event={"ID":"6228dcb6-2940-4494-b1fa-838d28618279","Type":"ContainerStarted","Data":"6765331be43b26f21ba17b7ff2bdfd6c2d9d758d2339f7ca8171b493a229c5ea"} Jan 27 08:08:39 crc kubenswrapper[4799]: I0127 08:08:39.153049 4799 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-db-sync-d9mwm" podStartSLOduration=3.452665539 podStartE2EDuration="40.153026603s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="2026-01-27 08:08:00.747910066 +0000 UTC m=+1347.059014131" lastFinishedPulling="2026-01-27 08:08:37.44827113 +0000 UTC m=+1383.759375195" observedRunningTime="2026-01-27 08:08:39.141919015 +0000 UTC m=+1385.453023080" watchObservedRunningTime="2026-01-27 08:08:39.153026603 +0000 UTC m=+1385.464130668" Jan 27 08:08:39 crc kubenswrapper[4799]: I0127 08:08:39.172907 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cj8lr" podStartSLOduration=3.295281772 podStartE2EDuration="40.172883115s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="2026-01-27 08:08:01.112448328 +0000 UTC m=+1347.423552393" lastFinishedPulling="2026-01-27 08:08:37.990049671 +0000 UTC m=+1384.301153736" observedRunningTime="2026-01-27 08:08:39.16488091 +0000 UTC m=+1385.475984995" watchObservedRunningTime="2026-01-27 08:08:39.172883115 +0000 UTC m=+1385.483987180" Jan 27 08:08:42 crc kubenswrapper[4799]: I0127 08:08:42.150660 4799 generic.go:334] "Generic (PLEG): container finished" podID="6228dcb6-2940-4494-b1fa-838d28618279" containerID="6765331be43b26f21ba17b7ff2bdfd6c2d9d758d2339f7ca8171b493a229c5ea" exitCode=0 Jan 27 08:08:42 crc kubenswrapper[4799]: I0127 08:08:42.150875 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj8lr" event={"ID":"6228dcb6-2940-4494-b1fa-838d28618279","Type":"ContainerDied","Data":"6765331be43b26f21ba17b7ff2bdfd6c2d9d758d2339f7ca8171b493a229c5ea"} Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.458581 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.567612 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-db-sync-config-data\") pod \"6228dcb6-2940-4494-b1fa-838d28618279\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.567796 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-combined-ca-bundle\") pod \"6228dcb6-2940-4494-b1fa-838d28618279\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.567880 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbvn9\" (UniqueName: \"kubernetes.io/projected/6228dcb6-2940-4494-b1fa-838d28618279-kube-api-access-sbvn9\") pod \"6228dcb6-2940-4494-b1fa-838d28618279\" (UID: \"6228dcb6-2940-4494-b1fa-838d28618279\") " Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.573072 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6228dcb6-2940-4494-b1fa-838d28618279" (UID: "6228dcb6-2940-4494-b1fa-838d28618279"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.573562 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6228dcb6-2940-4494-b1fa-838d28618279-kube-api-access-sbvn9" (OuterVolumeSpecName: "kube-api-access-sbvn9") pod "6228dcb6-2940-4494-b1fa-838d28618279" (UID: "6228dcb6-2940-4494-b1fa-838d28618279"). 
InnerVolumeSpecName "kube-api-access-sbvn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.598693 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6228dcb6-2940-4494-b1fa-838d28618279" (UID: "6228dcb6-2940-4494-b1fa-838d28618279"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.672402 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.672740 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6228dcb6-2940-4494-b1fa-838d28618279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:43 crc kubenswrapper[4799]: I0127 08:08:43.672754 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbvn9\" (UniqueName: \"kubernetes.io/projected/6228dcb6-2940-4494-b1fa-838d28618279-kube-api-access-sbvn9\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.174954 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerStarted","Data":"680d4908c5490e9e4c5591e9c593d3f826b7132939a4323896e0d00999dca20e"} Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.175371 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="proxy-httpd" 
containerID="cri-o://680d4908c5490e9e4c5591e9c593d3f826b7132939a4323896e0d00999dca20e" gracePeriod=30 Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.175422 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="sg-core" containerID="cri-o://1c74fc2090f97519cbf9706c495b923132e1047c2bdf13fd312d9fc586b10fca" gracePeriod=30 Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.175630 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-notification-agent" containerID="cri-o://045ed831d12ac9472a219e8b71d36cf6431ef93f0958a22272b066f27968894e" gracePeriod=30 Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.179891 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-central-agent" containerID="cri-o://1187d2907da98f5618a1286568dd296c6a90299e773f7c552dbd8b8ddc4a3f97" gracePeriod=30 Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.189601 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff7b8d449-xjt48" event={"ID":"2db9ba76-0532-4ed0-972e-fd5452048b97","Type":"ContainerStarted","Data":"7a13a4dba57a64680601c65f46bc4e1fd1ddd9881983073fa8db00588d91d96c"} Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.189655 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff7b8d449-xjt48" event={"ID":"2db9ba76-0532-4ed0-972e-fd5452048b97","Type":"ContainerStarted","Data":"0dcbd436a5762bcc61d230602c621a9b9849e2da017a7bac5c585459bb6be746"} Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.189763 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 
08:08:44.197332 4799 generic.go:334] "Generic (PLEG): container finished" podID="84f49a02-4934-43be-aa45-d24a40b20db2" containerID="0ac7a7f07ad4bec91f6ec2aa1d2412ca93a2b5761f8883e09f189453418e7eb4" exitCode=0 Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.197449 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9mwm" event={"ID":"84f49a02-4934-43be-aa45-d24a40b20db2","Type":"ContainerDied","Data":"0ac7a7f07ad4bec91f6ec2aa1d2412ca93a2b5761f8883e09f189453418e7eb4"} Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.206457 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj8lr" event={"ID":"6228dcb6-2940-4494-b1fa-838d28618279","Type":"ContainerDied","Data":"e0ee3b6638a19dbbd6263622143ad0e219c94b7bc52a1f1ba2e17c19fef4bc9f"} Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.206514 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0ee3b6638a19dbbd6263622143ad0e219c94b7bc52a1f1ba2e17c19fef4bc9f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.206585 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cj8lr" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.212708 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.03590989 podStartE2EDuration="45.212674924s" podCreationTimestamp="2026-01-27 08:07:59 +0000 UTC" firstStartedPulling="2026-01-27 08:08:00.879404332 +0000 UTC m=+1347.190508397" lastFinishedPulling="2026-01-27 08:08:43.056169366 +0000 UTC m=+1389.367273431" observedRunningTime="2026-01-27 08:08:44.210123766 +0000 UTC m=+1390.521227881" watchObservedRunningTime="2026-01-27 08:08:44.212674924 +0000 UTC m=+1390.523779029" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.267968 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5ff7b8d449-xjt48" podStartSLOduration=7.267951477 podStartE2EDuration="7.267951477s" podCreationTimestamp="2026-01-27 08:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:44.265537352 +0000 UTC m=+1390.576641427" watchObservedRunningTime="2026-01-27 08:08:44.267951477 +0000 UTC m=+1390.579055542" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.435112 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-85fc64b547-v7lvv"] Jan 27 08:08:44 crc kubenswrapper[4799]: E0127 08:08:44.435566 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6228dcb6-2940-4494-b1fa-838d28618279" containerName="barbican-db-sync" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.435591 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6228dcb6-2940-4494-b1fa-838d28618279" containerName="barbican-db-sync" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.435815 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6228dcb6-2940-4494-b1fa-838d28618279" 
containerName="barbican-db-sync" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.436950 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.449294 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9dhbt" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.450067 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.466102 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.483226 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-85fc64b547-v7lvv"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.496502 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.496840 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b30668-20df-41a6-80b4-ee59aea714dc-logs\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.496939 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-combined-ca-bundle\") pod 
\"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.497050 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data-custom\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.497193 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swqq\" (UniqueName: \"kubernetes.io/projected/57b30668-20df-41a6-80b4-ee59aea714dc-kube-api-access-7swqq\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.514841 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b7647d64-tp8mw"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.524609 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.525393 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.530556 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.537489 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b7647d64-tp8mw"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data-custom\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610404 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610433 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-combined-ca-bundle\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610473 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2d63e438-475a-4686-861e-5fba1fcb6767-logs\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610538 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610604 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcz95\" (UniqueName: \"kubernetes.io/projected/2d63e438-475a-4686-861e-5fba1fcb6767-kube-api-access-gcz95\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b30668-20df-41a6-80b4-ee59aea714dc-logs\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610689 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-combined-ca-bundle\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.610719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data-custom\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.613456 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swqq\" (UniqueName: \"kubernetes.io/projected/57b30668-20df-41a6-80b4-ee59aea714dc-kube-api-access-7swqq\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.614519 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b30668-20df-41a6-80b4-ee59aea714dc-logs\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.653569 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.654606 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-combined-ca-bundle\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.654705 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data-custom\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.654857 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swqq\" (UniqueName: \"kubernetes.io/projected/57b30668-20df-41a6-80b4-ee59aea714dc-kube-api-access-7swqq\") pod \"barbican-worker-85fc64b547-v7lvv\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.685580 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5zp7"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.685849 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerName="dnsmasq-dns" containerID="cri-o://f539f04a1a8918582f8a744cb307a21be74e2701b6c90dd1c0445404688673f1" gracePeriod=10 Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.725730 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data-custom\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.725871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc 
kubenswrapper[4799]: I0127 08:08:44.725909 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-combined-ca-bundle\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.725971 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d63e438-475a-4686-861e-5fba1fcb6767-logs\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.726175 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcz95\" (UniqueName: \"kubernetes.io/projected/2d63e438-475a-4686-861e-5fba1fcb6767-kube-api-access-gcz95\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.734043 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data-custom\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.734803 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d63e438-475a-4686-861e-5fba1fcb6767-logs\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " 
pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.739339 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.741147 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-combined-ca-bundle\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.753790 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcz95\" (UniqueName: \"kubernetes.io/projected/2d63e438-475a-4686-861e-5fba1fcb6767-kube-api-access-gcz95\") pod \"barbican-keystone-listener-b7647d64-tp8mw\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.755266 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.761242 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6cq8f"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.763286 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.779689 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6cq8f"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.795178 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5986c44cf4-xhtf8"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.797109 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.799145 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.818470 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5986c44cf4-xhtf8"] Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.828680 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.828782 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-config\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.828817 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54204b59-e201-4fe0-b86b-20a807415269-logs\") pod 
\"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.828880 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-combined-ca-bundle\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.828993 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hnmv\" (UniqueName: \"kubernetes.io/projected/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-kube-api-access-8hnmv\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.829117 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.829237 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbkmz\" (UniqueName: \"kubernetes.io/projected/54204b59-e201-4fe0-b86b-20a807415269-kube-api-access-gbkmz\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.829402 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.829559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.829731 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.829802 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data-custom\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.853775 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.930914 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbkmz\" (UniqueName: \"kubernetes.io/projected/54204b59-e201-4fe0-b86b-20a807415269-kube-api-access-gbkmz\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931231 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931330 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931372 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931401 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data-custom\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " 
pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931444 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931477 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-config\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931505 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54204b59-e201-4fe0-b86b-20a807415269-logs\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931526 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-combined-ca-bundle\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.931563 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hnmv\" (UniqueName: \"kubernetes.io/projected/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-kube-api-access-8hnmv\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc 
kubenswrapper[4799]: I0127 08:08:44.931597 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.937802 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.947216 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54204b59-e201-4fe0-b86b-20a807415269-logs\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.947656 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-svc\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.955019 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-config\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.964689 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gbkmz\" (UniqueName: \"kubernetes.io/projected/54204b59-e201-4fe0-b86b-20a807415269-kube-api-access-gbkmz\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.965745 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.966951 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data-custom\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.967510 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hnmv\" (UniqueName: \"kubernetes.io/projected/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-kube-api-access-8hnmv\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.971872 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.980477 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-6cq8f\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:44 crc kubenswrapper[4799]: I0127 08:08:44.986888 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-combined-ca-bundle\") pod \"barbican-api-5986c44cf4-xhtf8\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.094240 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.117428 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:45 crc kubenswrapper[4799]: E0127 08:08:45.198101 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5da918_e50a_4642_947e_2f70675c384a.slice/crio-conmon-1187d2907da98f5618a1286568dd296c6a90299e773f7c552dbd8b8ddc4a3f97.scope\": RecentStats: unable to find data in memory cache]" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.325693 4799 generic.go:334] "Generic (PLEG): container finished" podID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerID="f539f04a1a8918582f8a744cb307a21be74e2701b6c90dd1c0445404688673f1" exitCode=0 Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.325759 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" event={"ID":"d322f101-f8ff-4a8b-9acb-9d441cf2367a","Type":"ContainerDied","Data":"f539f04a1a8918582f8a744cb307a21be74e2701b6c90dd1c0445404688673f1"} Jan 27 08:08:45 crc 
kubenswrapper[4799]: I0127 08:08:45.386363 4799 generic.go:334] "Generic (PLEG): container finished" podID="1e5da918-e50a-4642-947e-2f70675c384a" containerID="680d4908c5490e9e4c5591e9c593d3f826b7132939a4323896e0d00999dca20e" exitCode=0 Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.386410 4799 generic.go:334] "Generic (PLEG): container finished" podID="1e5da918-e50a-4642-947e-2f70675c384a" containerID="1c74fc2090f97519cbf9706c495b923132e1047c2bdf13fd312d9fc586b10fca" exitCode=2 Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.386419 4799 generic.go:334] "Generic (PLEG): container finished" podID="1e5da918-e50a-4642-947e-2f70675c384a" containerID="1187d2907da98f5618a1286568dd296c6a90299e773f7c552dbd8b8ddc4a3f97" exitCode=0 Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.386614 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerDied","Data":"680d4908c5490e9e4c5591e9c593d3f826b7132939a4323896e0d00999dca20e"} Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.386642 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerDied","Data":"1c74fc2090f97519cbf9706c495b923132e1047c2bdf13fd312d9fc586b10fca"} Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.386653 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerDied","Data":"1187d2907da98f5618a1286568dd296c6a90299e773f7c552dbd8b8ddc4a3f97"} Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.457753 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-85fc64b547-v7lvv"] Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.561806 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.652171 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-svc\") pod \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.699910 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d322f101-f8ff-4a8b-9acb-9d441cf2367a" (UID: "d322f101-f8ff-4a8b-9acb-9d441cf2367a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:45 crc kubenswrapper[4799]: W0127 08:08:45.729977 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d63e438_475a_4686_861e_5fba1fcb6767.slice/crio-1212791c981bc88fb260f2c57b5b8b9e0d847ee1813fc85b8dd8607887fbc597 WatchSource:0}: Error finding container 1212791c981bc88fb260f2c57b5b8b9e0d847ee1813fc85b8dd8607887fbc597: Status 404 returned error can't find the container with id 1212791c981bc88fb260f2c57b5b8b9e0d847ee1813fc85b8dd8607887fbc597 Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.738248 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b7647d64-tp8mw"] Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.753935 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-sb\") pod \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.754008 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-swift-storage-0\") pod \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.754083 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fglxv\" (UniqueName: \"kubernetes.io/projected/d322f101-f8ff-4a8b-9acb-9d441cf2367a-kube-api-access-fglxv\") pod \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.754102 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-nb\") pod \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.754145 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-config\") pod \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\" (UID: \"d322f101-f8ff-4a8b-9acb-9d441cf2367a\") " Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.755324 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.763023 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d322f101-f8ff-4a8b-9acb-9d441cf2367a-kube-api-access-fglxv" (OuterVolumeSpecName: "kube-api-access-fglxv") pod "d322f101-f8ff-4a8b-9acb-9d441cf2367a" (UID: "d322f101-f8ff-4a8b-9acb-9d441cf2367a"). InnerVolumeSpecName "kube-api-access-fglxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.799199 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d322f101-f8ff-4a8b-9acb-9d441cf2367a" (UID: "d322f101-f8ff-4a8b-9acb-9d441cf2367a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.808647 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d322f101-f8ff-4a8b-9acb-9d441cf2367a" (UID: "d322f101-f8ff-4a8b-9acb-9d441cf2367a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.812510 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d322f101-f8ff-4a8b-9acb-9d441cf2367a" (UID: "d322f101-f8ff-4a8b-9acb-9d441cf2367a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.827676 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-config" (OuterVolumeSpecName: "config") pod "d322f101-f8ff-4a8b-9acb-9d441cf2367a" (UID: "d322f101-f8ff-4a8b-9acb-9d441cf2367a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.856868 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.856904 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.856914 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fglxv\" (UniqueName: \"kubernetes.io/projected/d322f101-f8ff-4a8b-9acb-9d441cf2367a-kube-api-access-fglxv\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.856923 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.856935 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d322f101-f8ff-4a8b-9acb-9d441cf2367a-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.891847 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:45 crc kubenswrapper[4799]: W0127 08:08:45.927583 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3caf4a8e_4d8e_4e9d_8a4f_3e417db021db.slice/crio-077f2c429661f86333434e4f56c731d5c623ec95ab9ecbdb5c549c8b1090a9dc WatchSource:0}: Error finding container 077f2c429661f86333434e4f56c731d5c623ec95ab9ecbdb5c549c8b1090a9dc: Status 404 returned error can't find the container with id 077f2c429661f86333434e4f56c731d5c623ec95ab9ecbdb5c549c8b1090a9dc Jan 27 08:08:45 crc kubenswrapper[4799]: I0127 08:08:45.950214 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6cq8f"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.036884 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5986c44cf4-xhtf8"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.060633 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-combined-ca-bundle\") pod \"84f49a02-4934-43be-aa45-d24a40b20db2\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.060783 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f49a02-4934-43be-aa45-d24a40b20db2-etc-machine-id\") pod \"84f49a02-4934-43be-aa45-d24a40b20db2\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.060802 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-scripts\") pod \"84f49a02-4934-43be-aa45-d24a40b20db2\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " Jan 27 08:08:46 
crc kubenswrapper[4799]: I0127 08:08:46.060855 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-config-data\") pod \"84f49a02-4934-43be-aa45-d24a40b20db2\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.060869 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-db-sync-config-data\") pod \"84f49a02-4934-43be-aa45-d24a40b20db2\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.061010 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxpvl\" (UniqueName: \"kubernetes.io/projected/84f49a02-4934-43be-aa45-d24a40b20db2-kube-api-access-cxpvl\") pod \"84f49a02-4934-43be-aa45-d24a40b20db2\" (UID: \"84f49a02-4934-43be-aa45-d24a40b20db2\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.061527 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84f49a02-4934-43be-aa45-d24a40b20db2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "84f49a02-4934-43be-aa45-d24a40b20db2" (UID: "84f49a02-4934-43be-aa45-d24a40b20db2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.076593 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f49a02-4934-43be-aa45-d24a40b20db2-kube-api-access-cxpvl" (OuterVolumeSpecName: "kube-api-access-cxpvl") pod "84f49a02-4934-43be-aa45-d24a40b20db2" (UID: "84f49a02-4934-43be-aa45-d24a40b20db2"). InnerVolumeSpecName "kube-api-access-cxpvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.090843 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-scripts" (OuterVolumeSpecName: "scripts") pod "84f49a02-4934-43be-aa45-d24a40b20db2" (UID: "84f49a02-4934-43be-aa45-d24a40b20db2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.091222 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "84f49a02-4934-43be-aa45-d24a40b20db2" (UID: "84f49a02-4934-43be-aa45-d24a40b20db2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.130737 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84f49a02-4934-43be-aa45-d24a40b20db2" (UID: "84f49a02-4934-43be-aa45-d24a40b20db2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.149198 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-config-data" (OuterVolumeSpecName: "config-data") pod "84f49a02-4934-43be-aa45-d24a40b20db2" (UID: "84f49a02-4934-43be-aa45-d24a40b20db2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.163275 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.163323 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.163334 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxpvl\" (UniqueName: \"kubernetes.io/projected/84f49a02-4934-43be-aa45-d24a40b20db2-kube-api-access-cxpvl\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.163348 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.163356 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f49a02-4934-43be-aa45-d24a40b20db2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.163364 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f49a02-4934-43be-aa45-d24a40b20db2-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.407546 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" event={"ID":"2d63e438-475a-4686-861e-5fba1fcb6767","Type":"ContainerStarted","Data":"1212791c981bc88fb260f2c57b5b8b9e0d847ee1813fc85b8dd8607887fbc597"} Jan 27 
08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.416364 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.416368 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" event={"ID":"d322f101-f8ff-4a8b-9acb-9d441cf2367a","Type":"ContainerDied","Data":"04c4336bfba1eecd7923d10b4ea0d74a2e02fec2c4befdb91d1cade662bd29b1"} Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.417537 4799 scope.go:117] "RemoveContainer" containerID="f539f04a1a8918582f8a744cb307a21be74e2701b6c90dd1c0445404688673f1" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.436700 4799 generic.go:334] "Generic (PLEG): container finished" podID="1e5da918-e50a-4642-947e-2f70675c384a" containerID="045ed831d12ac9472a219e8b71d36cf6431ef93f0958a22272b066f27968894e" exitCode=0 Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.436842 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerDied","Data":"045ed831d12ac9472a219e8b71d36cf6431ef93f0958a22272b066f27968894e"} Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.447177 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9mwm" event={"ID":"84f49a02-4934-43be-aa45-d24a40b20db2","Type":"ContainerDied","Data":"4ffe9cfae9fd5ea05f57990d10b85c5b38485c3245ac778cbe173fb4e8abbd53"} Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.447221 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffe9cfae9fd5ea05f57990d10b85c5b38485c3245ac778cbe173fb4e8abbd53" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.447392 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d9mwm" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.536228 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85fc64b547-v7lvv" event={"ID":"57b30668-20df-41a6-80b4-ee59aea714dc","Type":"ContainerStarted","Data":"560b70a2b269e41b89f2f6f6d53fbb201d6dec617563b16c36013515d335ff36"} Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.536645 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5986c44cf4-xhtf8" event={"ID":"54204b59-e201-4fe0-b86b-20a807415269","Type":"ContainerStarted","Data":"631c599c146c5a3cfba30ba7ef2b01d015922e2be50ab73f4d54d009b33ffc82"} Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.536660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" event={"ID":"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db","Type":"ContainerStarted","Data":"077f2c429661f86333434e4f56c731d5c623ec95ab9ecbdb5c549c8b1090a9dc"} Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.562553 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5zp7"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.605661 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d5zp7"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.614769 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:46 crc kubenswrapper[4799]: E0127 08:08:46.615152 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f49a02-4934-43be-aa45-d24a40b20db2" containerName="cinder-db-sync" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.615168 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f49a02-4934-43be-aa45-d24a40b20db2" containerName="cinder-db-sync" Jan 27 08:08:46 crc kubenswrapper[4799]: E0127 08:08:46.615179 4799 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerName="init" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.615187 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerName="init" Jan 27 08:08:46 crc kubenswrapper[4799]: E0127 08:08:46.615208 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerName="dnsmasq-dns" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.615215 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerName="dnsmasq-dns" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.615381 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerName="dnsmasq-dns" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.615418 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f49a02-4934-43be-aa45-d24a40b20db2" containerName="cinder-db-sync" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.616333 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.628026 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.629842 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-55nfh" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.630208 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.630491 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.659321 4799 scope.go:117] "RemoveContainer" containerID="6da024e4fa81fd6f3513b14c398eda7842cb22fa5f68b705018cab25a2dd0184" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.703784 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.733315 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6cq8f"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.741078 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-m98pb"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.742464 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.750315 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-m98pb"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.790035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.790078 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.790099 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dq8\" (UniqueName: \"kubernetes.io/projected/7e958b88-23dd-4e18-bd39-497a621e39ba-kube-api-access-r7dq8\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.790126 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.790149 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.790281 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e958b88-23dd-4e18-bd39-497a621e39ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.813630 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.821230 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:46 crc kubenswrapper[4799]: E0127 08:08:46.821679 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="proxy-httpd" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.821694 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="proxy-httpd" Jan 27 08:08:46 crc kubenswrapper[4799]: E0127 08:08:46.821707 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="sg-core" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.821714 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="sg-core" Jan 27 08:08:46 crc kubenswrapper[4799]: E0127 08:08:46.821737 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-central-agent" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.821747 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-central-agent" Jan 27 08:08:46 crc kubenswrapper[4799]: E0127 08:08:46.821762 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-notification-agent" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.821769 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-notification-agent" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.821974 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="proxy-httpd" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.821992 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-notification-agent" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.822017 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="sg-core" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.822038 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5da918-e50a-4642-947e-2f70675c384a" containerName="ceilometer-central-agent" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.823621 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.828754 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.844707 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892242 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e958b88-23dd-4e18-bd39-497a621e39ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892328 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892347 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892365 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dq8\" (UniqueName: \"kubernetes.io/projected/7e958b88-23dd-4e18-bd39-497a621e39ba-kube-api-access-r7dq8\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892410 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892426 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892449 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892468 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892498 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-config\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892507 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e958b88-23dd-4e18-bd39-497a621e39ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892529 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.892553 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7kg\" (UniqueName: \"kubernetes.io/projected/38d14031-750d-40d9-9894-e7e81fcb6538-kube-api-access-sx7kg\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.900985 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.906820 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.907095 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.910878 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.911113 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dq8\" (UniqueName: \"kubernetes.io/projected/7e958b88-23dd-4e18-bd39-497a621e39ba-kube-api-access-r7dq8\") pod \"cinder-scheduler-0\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.948989 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.993516 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-run-httpd\") pod \"1e5da918-e50a-4642-947e-2f70675c384a\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.993596 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-combined-ca-bundle\") pod \"1e5da918-e50a-4642-947e-2f70675c384a\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.993693 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-log-httpd\") pod \"1e5da918-e50a-4642-947e-2f70675c384a\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.993814 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-config-data\") pod \"1e5da918-e50a-4642-947e-2f70675c384a\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.993912 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-scripts\") pod \"1e5da918-e50a-4642-947e-2f70675c384a\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994025 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqrpb\" (UniqueName: 
\"kubernetes.io/projected/1e5da918-e50a-4642-947e-2f70675c384a-kube-api-access-bqrpb\") pod \"1e5da918-e50a-4642-947e-2f70675c384a\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994095 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-sg-core-conf-yaml\") pod \"1e5da918-e50a-4642-947e-2f70675c384a\" (UID: \"1e5da918-e50a-4642-947e-2f70675c384a\") " Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994573 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc22b588-9cca-4284-9e73-27a8cd5113b4-logs\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994694 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994735 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e5da918-e50a-4642-947e-2f70675c384a" (UID: "1e5da918-e50a-4642-947e-2f70675c384a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994754 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994853 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994901 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994935 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.994984 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 
08:08:46.995013 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mwr5\" (UniqueName: \"kubernetes.io/projected/fc22b588-9cca-4284-9e73-27a8cd5113b4-kube-api-access-8mwr5\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.995031 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-config\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.995054 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc22b588-9cca-4284-9e73-27a8cd5113b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.995074 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e5da918-e50a-4642-947e-2f70675c384a" (UID: "1e5da918-e50a-4642-947e-2f70675c384a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.995845 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-config\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.996141 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.996196 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7kg\" (UniqueName: \"kubernetes.io/projected/38d14031-750d-40d9-9894-e7e81fcb6538-kube-api-access-sx7kg\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.996221 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-scripts\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.996320 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.996336 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1e5da918-e50a-4642-947e-2f70675c384a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.996499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.997480 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.998473 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:46 crc kubenswrapper[4799]: I0127 08:08:46.999164 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.011383 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5da918-e50a-4642-947e-2f70675c384a-kube-api-access-bqrpb" (OuterVolumeSpecName: "kube-api-access-bqrpb") pod "1e5da918-e50a-4642-947e-2f70675c384a" (UID: "1e5da918-e50a-4642-947e-2f70675c384a"). 
InnerVolumeSpecName "kube-api-access-bqrpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.015734 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7kg\" (UniqueName: \"kubernetes.io/projected/38d14031-750d-40d9-9894-e7e81fcb6538-kube-api-access-sx7kg\") pod \"dnsmasq-dns-5c9776ccc5-m98pb\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.016059 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-scripts" (OuterVolumeSpecName: "scripts") pod "1e5da918-e50a-4642-947e-2f70675c384a" (UID: "1e5da918-e50a-4642-947e-2f70675c384a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.031375 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e5da918-e50a-4642-947e-2f70675c384a" (UID: "1e5da918-e50a-4642-947e-2f70675c384a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.078454 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.095644 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e5da918-e50a-4642-947e-2f70675c384a" (UID: "1e5da918-e50a-4642-947e-2f70675c384a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098462 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098524 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098558 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098584 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mwr5\" (UniqueName: \"kubernetes.io/projected/fc22b588-9cca-4284-9e73-27a8cd5113b4-kube-api-access-8mwr5\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098611 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc22b588-9cca-4284-9e73-27a8cd5113b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098662 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-scripts\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098728 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc22b588-9cca-4284-9e73-27a8cd5113b4-logs\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098824 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqrpb\" (UniqueName: \"kubernetes.io/projected/1e5da918-e50a-4642-947e-2f70675c384a-kube-api-access-bqrpb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098842 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098853 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.098865 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.099168 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc22b588-9cca-4284-9e73-27a8cd5113b4-logs\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 
08:08:47.101488 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc22b588-9cca-4284-9e73-27a8cd5113b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.112973 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-scripts\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.113048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.113441 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.115029 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.134066 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mwr5\" (UniqueName: \"kubernetes.io/projected/fc22b588-9cca-4284-9e73-27a8cd5113b4-kube-api-access-8mwr5\") pod \"cinder-api-0\" (UID: 
\"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.152477 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-config-data" (OuterVolumeSpecName: "config-data") pod "1e5da918-e50a-4642-947e-2f70675c384a" (UID: "1e5da918-e50a-4642-947e-2f70675c384a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.162797 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.208012 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5da918-e50a-4642-947e-2f70675c384a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.481847 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e5da918-e50a-4642-947e-2f70675c384a","Type":"ContainerDied","Data":"4839b3919cc64d6659ca5cb3cd570412337e76fac8662e5e943983a0a24df48d"} Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.482129 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.482214 4799 scope.go:117] "RemoveContainer" containerID="680d4908c5490e9e4c5591e9c593d3f826b7132939a4323896e0d00999dca20e" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.485444 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5986c44cf4-xhtf8" event={"ID":"54204b59-e201-4fe0-b86b-20a807415269","Type":"ContainerStarted","Data":"39276b2c3c01c8673aba78103bc42ac3840d0c04da54ce6044f126303eb51030"} Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.485475 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5986c44cf4-xhtf8" event={"ID":"54204b59-e201-4fe0-b86b-20a807415269","Type":"ContainerStarted","Data":"9df4b0110e1f07ef202f1ce30cb3c791bd7cb5b94cfc83252634b17863fedc83"} Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.485617 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.490073 4799 generic.go:334] "Generic (PLEG): container finished" podID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerID="1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234" exitCode=0 Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.490134 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" event={"ID":"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db","Type":"ContainerDied","Data":"1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234"} Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.527511 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.546058 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5986c44cf4-xhtf8" podStartSLOduration=3.546038697 
podStartE2EDuration="3.546038697s" podCreationTimestamp="2026-01-27 08:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:47.510310768 +0000 UTC m=+1393.821414833" watchObservedRunningTime="2026-01-27 08:08:47.546038697 +0000 UTC m=+1393.857142762" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.559802 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.606376 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.634272 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.636565 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.638855 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.639089 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.653886 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.716422 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-log-httpd\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.716517 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdh74\" 
(UniqueName: \"kubernetes.io/projected/bb0e296f-2929-4aa2-8272-ccba5119c5d1-kube-api-access-jdh74\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.716559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-run-httpd\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.716583 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.716614 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-scripts\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.716681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-config-data\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.716694 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.818703 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdh74\" (UniqueName: \"kubernetes.io/projected/bb0e296f-2929-4aa2-8272-ccba5119c5d1-kube-api-access-jdh74\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.818818 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-run-httpd\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.818838 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.818867 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-scripts\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.818905 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-config-data\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.818924 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.818986 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-log-httpd\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.819749 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-run-httpd\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.819780 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-log-httpd\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.824445 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.827555 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-scripts\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.827973 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-config-data\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.828581 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.833952 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdh74\" (UniqueName: \"kubernetes.io/projected/bb0e296f-2929-4aa2-8272-ccba5119c5d1-kube-api-access-jdh74\") pod \"ceilometer-0\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " pod="openstack/ceilometer-0" Jan 27 08:08:47 crc kubenswrapper[4799]: I0127 08:08:47.961487 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:08:48 crc kubenswrapper[4799]: I0127 08:08:48.517745 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5da918-e50a-4642-947e-2f70675c384a" path="/var/lib/kubelet/pods/1e5da918-e50a-4642-947e-2f70675c384a/volumes" Jan 27 08:08:48 crc kubenswrapper[4799]: I0127 08:08:48.520293 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" path="/var/lib/kubelet/pods/d322f101-f8ff-4a8b-9acb-9d441cf2367a/volumes" Jan 27 08:08:48 crc kubenswrapper[4799]: I0127 08:08:48.538258 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:48 crc kubenswrapper[4799]: I0127 08:08:48.574623 4799 scope.go:117] "RemoveContainer" containerID="1c74fc2090f97519cbf9706c495b923132e1047c2bdf13fd312d9fc586b10fca" Jan 27 08:08:48 crc kubenswrapper[4799]: I0127 08:08:48.735072 4799 scope.go:117] "RemoveContainer" containerID="045ed831d12ac9472a219e8b71d36cf6431ef93f0958a22272b066f27968894e" Jan 27 08:08:48 crc kubenswrapper[4799]: I0127 08:08:48.803577 4799 scope.go:117] "RemoveContainer" containerID="1187d2907da98f5618a1286568dd296c6a90299e773f7c552dbd8b8ddc4a3f97" Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.122439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-m98pb"] Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.135045 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:49 crc kubenswrapper[4799]: W0127 08:08:49.140518 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc22b588_9cca_4284_9e73_27a8cd5113b4.slice/crio-b41de7f518a2905328f2318dcee5711824bd69611476aded558978f156a4c498 WatchSource:0}: Error finding container b41de7f518a2905328f2318dcee5711824bd69611476aded558978f156a4c498: 
Status 404 returned error can't find the container with id b41de7f518a2905328f2318dcee5711824bd69611476aded558978f156a4c498 Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.281044 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:08:49 crc kubenswrapper[4799]: W0127 08:08:49.341677 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb0e296f_2929_4aa2_8272_ccba5119c5d1.slice/crio-8bdcc3ee3773b1282bd0d09dafd29b88728bb613d4313035f5cf7b9fbd79726c WatchSource:0}: Error finding container 8bdcc3ee3773b1282bd0d09dafd29b88728bb613d4313035f5cf7b9fbd79726c: Status 404 returned error can't find the container with id 8bdcc3ee3773b1282bd0d09dafd29b88728bb613d4313035f5cf7b9fbd79726c Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.549396 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerStarted","Data":"8bdcc3ee3773b1282bd0d09dafd29b88728bb613d4313035f5cf7b9fbd79726c"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.550598 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85fc64b547-v7lvv" event={"ID":"57b30668-20df-41a6-80b4-ee59aea714dc","Type":"ContainerStarted","Data":"5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.550626 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85fc64b547-v7lvv" event={"ID":"57b30668-20df-41a6-80b4-ee59aea714dc","Type":"ContainerStarted","Data":"f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.552496 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"fc22b588-9cca-4284-9e73-27a8cd5113b4","Type":"ContainerStarted","Data":"b41de7f518a2905328f2318dcee5711824bd69611476aded558978f156a4c498"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.554235 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" event={"ID":"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db","Type":"ContainerStarted","Data":"98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.554395 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" podUID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerName="dnsmasq-dns" containerID="cri-o://98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394" gracePeriod=10 Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.554641 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.556478 4799 generic.go:334] "Generic (PLEG): container finished" podID="38d14031-750d-40d9-9894-e7e81fcb6538" containerID="4a1b97fbce6eba591e536c18ca8edce65cec1e1907b9883c719af90b0583a283" exitCode=0 Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.556517 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" event={"ID":"38d14031-750d-40d9-9894-e7e81fcb6538","Type":"ContainerDied","Data":"4a1b97fbce6eba591e536c18ca8edce65cec1e1907b9883c719af90b0583a283"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.556532 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" event={"ID":"38d14031-750d-40d9-9894-e7e81fcb6538","Type":"ContainerStarted","Data":"725c1ce1458fe76229638d83fcba2a275566be8f03a632787824873108d01d89"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.562792 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-scheduler-0" event={"ID":"7e958b88-23dd-4e18-bd39-497a621e39ba","Type":"ContainerStarted","Data":"ce5ed8cbf1c981e8f69be408abb217475196048da86dde8c27a1e9e1c31caaab"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.577624 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" event={"ID":"2d63e438-475a-4686-861e-5fba1fcb6767","Type":"ContainerStarted","Data":"f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.577663 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" event={"ID":"2d63e438-475a-4686-861e-5fba1fcb6767","Type":"ContainerStarted","Data":"a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815"} Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.593168 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-85fc64b547-v7lvv" podStartSLOduration=2.522106644 podStartE2EDuration="5.593151921s" podCreationTimestamp="2026-01-27 08:08:44 +0000 UTC" firstStartedPulling="2026-01-27 08:08:45.55963337 +0000 UTC m=+1391.870737435" lastFinishedPulling="2026-01-27 08:08:48.630678647 +0000 UTC m=+1394.941782712" observedRunningTime="2026-01-27 08:08:49.570552495 +0000 UTC m=+1395.881656570" watchObservedRunningTime="2026-01-27 08:08:49.593151921 +0000 UTC m=+1395.904255976" Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.625540 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" podStartSLOduration=5.625519639 podStartE2EDuration="5.625519639s" podCreationTimestamp="2026-01-27 08:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:49.614391111 +0000 UTC m=+1395.925495196" watchObservedRunningTime="2026-01-27 
08:08:49.625519639 +0000 UTC m=+1395.936623704" Jan 27 08:08:49 crc kubenswrapper[4799]: I0127 08:08:49.647489 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" podStartSLOduration=2.7497622 podStartE2EDuration="5.647466888s" podCreationTimestamp="2026-01-27 08:08:44 +0000 UTC" firstStartedPulling="2026-01-27 08:08:45.735920498 +0000 UTC m=+1392.047024563" lastFinishedPulling="2026-01-27 08:08:48.633625186 +0000 UTC m=+1394.944729251" observedRunningTime="2026-01-27 08:08:49.642375811 +0000 UTC m=+1395.953479876" watchObservedRunningTime="2026-01-27 08:08:49.647466888 +0000 UTC m=+1395.958570953" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.337468 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.471589 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-d5zp7" podUID="d322f101-f8ff-4a8b-9acb-9d441cf2367a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.479147 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-nb\") pod \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.479211 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-swift-storage-0\") pod \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.479251 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-sb\") pod \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.479276 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hnmv\" (UniqueName: \"kubernetes.io/projected/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-kube-api-access-8hnmv\") pod \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.479352 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-svc\") pod \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.479418 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-config\") pod \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\" (UID: \"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db\") " Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.507180 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-kube-api-access-8hnmv" (OuterVolumeSpecName: "kube-api-access-8hnmv") pod "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" (UID: "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db"). InnerVolumeSpecName "kube-api-access-8hnmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.582744 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hnmv\" (UniqueName: \"kubernetes.io/projected/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-kube-api-access-8hnmv\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.593098 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" (UID: "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.611660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc22b588-9cca-4284-9e73-27a8cd5113b4","Type":"ContainerStarted","Data":"f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1"} Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.615921 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" (UID: "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.623608 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" (UID: "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.623790 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-config" (OuterVolumeSpecName: "config") pod "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" (UID: "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.629746 4799 generic.go:334] "Generic (PLEG): container finished" podID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerID="98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394" exitCode=0 Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.629801 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" event={"ID":"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db","Type":"ContainerDied","Data":"98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394"} Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.629827 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" event={"ID":"3caf4a8e-4d8e-4e9d-8a4f-3e417db021db","Type":"ContainerDied","Data":"077f2c429661f86333434e4f56c731d5c623ec95ab9ecbdb5c549c8b1090a9dc"} Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.629845 4799 scope.go:117] "RemoveContainer" containerID="98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.629937 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-6cq8f" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.634446 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" event={"ID":"38d14031-750d-40d9-9894-e7e81fcb6538","Type":"ContainerStarted","Data":"607f87d36e1034d05e1aad79171a81d35a409182a32c94104026b68380ebce9b"} Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.634531 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.641991 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7e958b88-23dd-4e18-bd39-497a621e39ba","Type":"ContainerStarted","Data":"047ceb50f91c218a380edad162192dbdf640620894152032f327c52058ba614b"} Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.644596 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerStarted","Data":"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5"} Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.658768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" (UID: "3caf4a8e-4d8e-4e9d-8a4f-3e417db021db"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.670982 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" podStartSLOduration=4.670965579 podStartE2EDuration="4.670965579s" podCreationTimestamp="2026-01-27 08:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:50.662908552 +0000 UTC m=+1396.974012637" watchObservedRunningTime="2026-01-27 08:08:50.670965579 +0000 UTC m=+1396.982069644" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.684444 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.684551 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.684567 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.684578 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.684587 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 
08:08:50.710612 4799 scope.go:117] "RemoveContainer" containerID="1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.746127 4799 scope.go:117] "RemoveContainer" containerID="98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394" Jan 27 08:08:50 crc kubenswrapper[4799]: E0127 08:08:50.746613 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394\": container with ID starting with 98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394 not found: ID does not exist" containerID="98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.746647 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394"} err="failed to get container status \"98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394\": rpc error: code = NotFound desc = could not find container \"98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394\": container with ID starting with 98db836429a22b94120171becdacdf2f0650432c4800222f7f2a9b57049fc394 not found: ID does not exist" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.746669 4799 scope.go:117] "RemoveContainer" containerID="1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234" Jan 27 08:08:50 crc kubenswrapper[4799]: E0127 08:08:50.746986 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234\": container with ID starting with 1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234 not found: ID does not exist" 
containerID="1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.747010 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234"} err="failed to get container status \"1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234\": rpc error: code = NotFound desc = could not find container \"1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234\": container with ID starting with 1390a8258331a1500dc1106f9c4346a1283b0ed4c55cf33dfbb69a22af588234 not found: ID does not exist" Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.978229 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6cq8f"] Jan 27 08:08:50 crc kubenswrapper[4799]: I0127 08:08:50.987497 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-6cq8f"] Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.207674 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.653955 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerStarted","Data":"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd"} Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.656841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc22b588-9cca-4284-9e73-27a8cd5113b4","Type":"ContainerStarted","Data":"3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8"} Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.656974 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.656992 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api-log" containerID="cri-o://f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1" gracePeriod=30 Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.657097 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api" containerID="cri-o://3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8" gracePeriod=30 Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.667735 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7e958b88-23dd-4e18-bd39-497a621e39ba","Type":"ContainerStarted","Data":"718ac6a54cd361b5a309e7f6dff7c144b9b872120e27139014c94af079e81c92"} Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.705338 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.900346583 podStartE2EDuration="5.705320731s" podCreationTimestamp="2026-01-27 08:08:46 +0000 UTC" firstStartedPulling="2026-01-27 08:08:48.556338364 +0000 UTC m=+1394.867442429" lastFinishedPulling="2026-01-27 08:08:49.361312512 +0000 UTC m=+1395.672416577" observedRunningTime="2026-01-27 08:08:51.704830267 +0000 UTC m=+1398.015934352" watchObservedRunningTime="2026-01-27 08:08:51.705320731 +0000 UTC m=+1398.016424796" Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.707367 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.707359365 podStartE2EDuration="5.707359365s" podCreationTimestamp="2026-01-27 08:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:51.690496823 +0000 UTC 
m=+1398.001600888" watchObservedRunningTime="2026-01-27 08:08:51.707359365 +0000 UTC m=+1398.018463430" Jan 27 08:08:51 crc kubenswrapper[4799]: I0127 08:08:51.950356 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.465925 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" path="/var/lib/kubelet/pods/3caf4a8e-4d8e-4e9d-8a4f-3e417db021db/volumes" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.677590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerStarted","Data":"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b"} Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.679919 4799 generic.go:334] "Generic (PLEG): container finished" podID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerID="f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1" exitCode=143 Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.680238 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc22b588-9cca-4284-9e73-27a8cd5113b4","Type":"ContainerDied","Data":"f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1"} Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.797694 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85c6c54fbb-zhvhw"] Jan 27 08:08:52 crc kubenswrapper[4799]: E0127 08:08:52.798120 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerName="init" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.798141 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerName="init" Jan 27 08:08:52 crc kubenswrapper[4799]: E0127 08:08:52.798172 4799 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerName="dnsmasq-dns" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.798180 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerName="dnsmasq-dns" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.798407 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3caf4a8e-4d8e-4e9d-8a4f-3e417db021db" containerName="dnsmasq-dns" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.799520 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.801620 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.801760 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.819120 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85c6c54fbb-zhvhw"] Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.940399 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbfc3a0-883d-46a6-af9b-879efb42840e-logs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.940736 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " 
pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.940780 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-public-tls-certs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.940810 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-combined-ca-bundle\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.940856 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data-custom\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.940880 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76kw\" (UniqueName: \"kubernetes.io/projected/0dbfc3a0-883d-46a6-af9b-879efb42840e-kube-api-access-k76kw\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:52 crc kubenswrapper[4799]: I0127 08:08:52.940927 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-internal-tls-certs\") pod 
\"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.042491 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.042544 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbfc3a0-883d-46a6-af9b-879efb42840e-logs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.042603 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-public-tls-certs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.042644 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-combined-ca-bundle\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.042680 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data-custom\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: 
\"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.042717 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76kw\" (UniqueName: \"kubernetes.io/projected/0dbfc3a0-883d-46a6-af9b-879efb42840e-kube-api-access-k76kw\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.042756 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-internal-tls-certs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.043344 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbfc3a0-883d-46a6-af9b-879efb42840e-logs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.050100 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-internal-tls-certs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.050148 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " 
pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.050771 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-combined-ca-bundle\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.052925 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data-custom\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.054020 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-public-tls-certs\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.066414 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76kw\" (UniqueName: \"kubernetes.io/projected/0dbfc3a0-883d-46a6-af9b-879efb42840e-kube-api-access-k76kw\") pod \"barbican-api-85c6c54fbb-zhvhw\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.122041 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.645011 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85c6c54fbb-zhvhw"] Jan 27 08:08:53 crc kubenswrapper[4799]: W0127 08:08:53.649845 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dbfc3a0_883d_46a6_af9b_879efb42840e.slice/crio-4ea8cf7ad52dfadf665eee540a9fb0534c8e26a962cae0d4cd1154cb7bc37cce WatchSource:0}: Error finding container 4ea8cf7ad52dfadf665eee540a9fb0534c8e26a962cae0d4cd1154cb7bc37cce: Status 404 returned error can't find the container with id 4ea8cf7ad52dfadf665eee540a9fb0534c8e26a962cae0d4cd1154cb7bc37cce Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.650375 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.715138 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerStarted","Data":"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a"} Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.717005 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c6c54fbb-zhvhw" event={"ID":"0dbfc3a0-883d-46a6-af9b-879efb42840e","Type":"ContainerStarted","Data":"4ea8cf7ad52dfadf665eee540a9fb0534c8e26a962cae0d4cd1154cb7bc37cce"} Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.721421 4799 generic.go:334] "Generic (PLEG): container finished" podID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerID="3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8" exitCode=0 Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.722056 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"fc22b588-9cca-4284-9e73-27a8cd5113b4","Type":"ContainerDied","Data":"3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8"} Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.722116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc22b588-9cca-4284-9e73-27a8cd5113b4","Type":"ContainerDied","Data":"b41de7f518a2905328f2318dcee5711824bd69611476aded558978f156a4c498"} Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.722136 4799 scope.go:117] "RemoveContainer" containerID="3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.722202 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.741084 4799 scope.go:117] "RemoveContainer" containerID="f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.752175 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data\") pod \"fc22b588-9cca-4284-9e73-27a8cd5113b4\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.752538 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc22b588-9cca-4284-9e73-27a8cd5113b4-logs\") pod \"fc22b588-9cca-4284-9e73-27a8cd5113b4\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.752737 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-combined-ca-bundle\") pod \"fc22b588-9cca-4284-9e73-27a8cd5113b4\" (UID: 
\"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.752856 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-scripts\") pod \"fc22b588-9cca-4284-9e73-27a8cd5113b4\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.752948 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc22b588-9cca-4284-9e73-27a8cd5113b4-etc-machine-id\") pod \"fc22b588-9cca-4284-9e73-27a8cd5113b4\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.753042 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc22b588-9cca-4284-9e73-27a8cd5113b4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fc22b588-9cca-4284-9e73-27a8cd5113b4" (UID: "fc22b588-9cca-4284-9e73-27a8cd5113b4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.753070 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data-custom\") pod \"fc22b588-9cca-4284-9e73-27a8cd5113b4\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.753231 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc22b588-9cca-4284-9e73-27a8cd5113b4-logs" (OuterVolumeSpecName: "logs") pod "fc22b588-9cca-4284-9e73-27a8cd5113b4" (UID: "fc22b588-9cca-4284-9e73-27a8cd5113b4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.753312 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mwr5\" (UniqueName: \"kubernetes.io/projected/fc22b588-9cca-4284-9e73-27a8cd5113b4-kube-api-access-8mwr5\") pod \"fc22b588-9cca-4284-9e73-27a8cd5113b4\" (UID: \"fc22b588-9cca-4284-9e73-27a8cd5113b4\") " Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.754019 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc22b588-9cca-4284-9e73-27a8cd5113b4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.754046 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc22b588-9cca-4284-9e73-27a8cd5113b4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.759599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fc22b588-9cca-4284-9e73-27a8cd5113b4" (UID: "fc22b588-9cca-4284-9e73-27a8cd5113b4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.760147 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-scripts" (OuterVolumeSpecName: "scripts") pod "fc22b588-9cca-4284-9e73-27a8cd5113b4" (UID: "fc22b588-9cca-4284-9e73-27a8cd5113b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.763425 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc22b588-9cca-4284-9e73-27a8cd5113b4-kube-api-access-8mwr5" (OuterVolumeSpecName: "kube-api-access-8mwr5") pod "fc22b588-9cca-4284-9e73-27a8cd5113b4" (UID: "fc22b588-9cca-4284-9e73-27a8cd5113b4"). InnerVolumeSpecName "kube-api-access-8mwr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.766353 4799 scope.go:117] "RemoveContainer" containerID="3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8" Jan 27 08:08:53 crc kubenswrapper[4799]: E0127 08:08:53.766798 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8\": container with ID starting with 3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8 not found: ID does not exist" containerID="3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.766823 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8"} err="failed to get container status \"3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8\": rpc error: code = NotFound desc = could not find container \"3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8\": container with ID starting with 3bc9a11e0b217e455af269de46dce45c3ca0dd9bdd24d0ba0c99a78c91eb24d8 not found: ID does not exist" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.766843 4799 scope.go:117] "RemoveContainer" containerID="f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1" Jan 27 08:08:53 crc kubenswrapper[4799]: E0127 08:08:53.767050 4799 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1\": container with ID starting with f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1 not found: ID does not exist" containerID="f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.767070 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1"} err="failed to get container status \"f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1\": rpc error: code = NotFound desc = could not find container \"f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1\": container with ID starting with f12ae2f0a085df7fc9fc23c1597081e929160c65d8835f8c24d590bf3e10e2c1 not found: ID does not exist" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.780437 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc22b588-9cca-4284-9e73-27a8cd5113b4" (UID: "fc22b588-9cca-4284-9e73-27a8cd5113b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.801351 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data" (OuterVolumeSpecName: "config-data") pod "fc22b588-9cca-4284-9e73-27a8cd5113b4" (UID: "fc22b588-9cca-4284-9e73-27a8cd5113b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.855761 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mwr5\" (UniqueName: \"kubernetes.io/projected/fc22b588-9cca-4284-9e73-27a8cd5113b4-kube-api-access-8mwr5\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.855797 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.855807 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.855816 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:53 crc kubenswrapper[4799]: I0127 08:08:53.855825 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc22b588-9cca-4284-9e73-27a8cd5113b4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.064434 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.070418 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.124023 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:54 crc kubenswrapper[4799]: E0127 08:08:54.125269 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api-log" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.125296 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api-log" Jan 27 08:08:54 crc kubenswrapper[4799]: E0127 08:08:54.125336 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.125346 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.125828 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api-log" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.125861 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" containerName="cinder-api" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.126963 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.129426 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.129751 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.130348 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.142874 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161338 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161384 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161440 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161465 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161486 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04e9a37-9722-491b-ada1-992d747e5bed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161514 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data-custom\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161534 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04e9a37-9722-491b-ada1-992d747e5bed-logs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161782 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-scripts\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.161969 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcfk4\" (UniqueName: \"kubernetes.io/projected/b04e9a37-9722-491b-ada1-992d747e5bed-kube-api-access-pcfk4\") pod \"cinder-api-0\" 
(UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264090 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264141 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04e9a37-9722-491b-ada1-992d747e5bed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264190 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data-custom\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04e9a37-9722-491b-ada1-992d747e5bed-logs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264264 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-scripts\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264324 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/b04e9a37-9722-491b-ada1-992d747e5bed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264343 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcfk4\" (UniqueName: \"kubernetes.io/projected/b04e9a37-9722-491b-ada1-992d747e5bed-kube-api-access-pcfk4\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264598 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264639 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264756 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.264953 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04e9a37-9722-491b-ada1-992d747e5bed-logs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc 
kubenswrapper[4799]: I0127 08:08:54.268000 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.270850 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-scripts\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.271795 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.272222 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.279032 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.279211 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.289903 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcfk4\" (UniqueName: \"kubernetes.io/projected/b04e9a37-9722-491b-ada1-992d747e5bed-kube-api-access-pcfk4\") pod \"cinder-api-0\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.463827 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc22b588-9cca-4284-9e73-27a8cd5113b4" path="/var/lib/kubelet/pods/fc22b588-9cca-4284-9e73-27a8cd5113b4/volumes" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.469777 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.739784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c6c54fbb-zhvhw" event={"ID":"0dbfc3a0-883d-46a6-af9b-879efb42840e","Type":"ContainerStarted","Data":"f6c0c751dfd74d698477e4e018861e43ef7141cef238f287a434550b2a21af4b"} Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.740099 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c6c54fbb-zhvhw" event={"ID":"0dbfc3a0-883d-46a6-af9b-879efb42840e","Type":"ContainerStarted","Data":"40d6b9faa74af8ff6a32d01f9fc3a6c0f6258a0b08ea53fa5774e5655a3aa97d"} Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.741935 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.741991 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.746685 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.780972 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85c6c54fbb-zhvhw" podStartSLOduration=2.78095072 podStartE2EDuration="2.78095072s" podCreationTimestamp="2026-01-27 08:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:54.763610605 +0000 UTC m=+1401.074714680" watchObservedRunningTime="2026-01-27 08:08:54.78095072 +0000 UTC m=+1401.092054785" Jan 27 08:08:54 crc kubenswrapper[4799]: I0127 08:08:54.801760 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.997886996 podStartE2EDuration="7.801737177s" podCreationTimestamp="2026-01-27 08:08:47 +0000 UTC" firstStartedPulling="2026-01-27 08:08:49.346880976 +0000 UTC m=+1395.657985041" lastFinishedPulling="2026-01-27 08:08:53.150731167 +0000 UTC m=+1399.461835222" observedRunningTime="2026-01-27 08:08:54.794566315 +0000 UTC m=+1401.105670400" watchObservedRunningTime="2026-01-27 08:08:54.801737177 +0000 UTC m=+1401.112841242" Jan 27 08:08:55 crc kubenswrapper[4799]: I0127 08:08:55.024017 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:08:55 crc kubenswrapper[4799]: I0127 08:08:55.760122 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b04e9a37-9722-491b-ada1-992d747e5bed","Type":"ContainerStarted","Data":"81cc342fe76174057a1e4ac1d424f9ae5b786539e062c9b4410b8ca621aa20ea"} Jan 27 08:08:56 crc kubenswrapper[4799]: I0127 08:08:56.681633 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:56 crc kubenswrapper[4799]: I0127 08:08:56.692278 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:08:56 crc kubenswrapper[4799]: I0127 08:08:56.786577 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b04e9a37-9722-491b-ada1-992d747e5bed","Type":"ContainerStarted","Data":"116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3"} Jan 27 08:08:56 crc kubenswrapper[4799]: I0127 08:08:56.786940 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 08:08:56 crc kubenswrapper[4799]: I0127 08:08:56.786955 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b04e9a37-9722-491b-ada1-992d747e5bed","Type":"ContainerStarted","Data":"f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6"} Jan 27 08:08:56 crc kubenswrapper[4799]: I0127 08:08:56.813907 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.813884254 podStartE2EDuration="2.813884254s" podCreationTimestamp="2026-01-27 08:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:08:56.803161917 +0000 UTC m=+1403.114265992" watchObservedRunningTime="2026-01-27 08:08:56.813884254 +0000 UTC m=+1403.124988319" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.079473 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.164417 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-pvktj"] Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.164661 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" podUID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerName="dnsmasq-dns" 
containerID="cri-o://89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885" gracePeriod=10 Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.176489 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.225747 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.745953 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.800654 4799 generic.go:334] "Generic (PLEG): container finished" podID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerID="89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885" exitCode=0 Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.801484 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.801865 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" event={"ID":"5fde0b34-cfe7-4ce2-9377-53e6ef0be697","Type":"ContainerDied","Data":"89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885"} Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.801890 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-pvktj" event={"ID":"5fde0b34-cfe7-4ce2-9377-53e6ef0be697","Type":"ContainerDied","Data":"4d3f4e17d0235ea45375360c62c49b8e10993eeb2e57d7632df928f502dd7386"} Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.801906 4799 scope.go:117] "RemoveContainer" containerID="89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.802848 4799 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/cinder-scheduler-0" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="cinder-scheduler" containerID="cri-o://047ceb50f91c218a380edad162192dbdf640620894152032f327c52058ba614b" gracePeriod=30 Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.803039 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="probe" containerID="cri-o://718ac6a54cd361b5a309e7f6dff7c144b9b872120e27139014c94af079e81c92" gracePeriod=30 Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.833549 4799 scope.go:117] "RemoveContainer" containerID="0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.844646 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24vdp\" (UniqueName: \"kubernetes.io/projected/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-kube-api-access-24vdp\") pod \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.844828 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-svc\") pod \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.844871 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-nb\") pod \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.844918 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-config\") pod \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.844977 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-sb\") pod \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.845042 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-swift-storage-0\") pod \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\" (UID: \"5fde0b34-cfe7-4ce2-9377-53e6ef0be697\") " Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.853472 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-kube-api-access-24vdp" (OuterVolumeSpecName: "kube-api-access-24vdp") pod "5fde0b34-cfe7-4ce2-9377-53e6ef0be697" (UID: "5fde0b34-cfe7-4ce2-9377-53e6ef0be697"). InnerVolumeSpecName "kube-api-access-24vdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.854094 4799 scope.go:117] "RemoveContainer" containerID="89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885" Jan 27 08:08:57 crc kubenswrapper[4799]: E0127 08:08:57.854538 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885\": container with ID starting with 89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885 not found: ID does not exist" containerID="89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.854574 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885"} err="failed to get container status \"89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885\": rpc error: code = NotFound desc = could not find container \"89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885\": container with ID starting with 89bbf42d7a5fe7fdb366285edcfeddd80e430533e822721ce0f47fede0c4a885 not found: ID does not exist" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.854595 4799 scope.go:117] "RemoveContainer" containerID="0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd" Jan 27 08:08:57 crc kubenswrapper[4799]: E0127 08:08:57.854815 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd\": container with ID starting with 0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd not found: ID does not exist" containerID="0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.854837 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd"} err="failed to get container status \"0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd\": rpc error: code = NotFound desc = could not find container \"0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd\": container with ID starting with 0a79c9b957938ffdc9e403ec034b3f2a8a9d57e148d4dd598917333b5b2bbdfd not found: ID does not exist" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.901024 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-config" (OuterVolumeSpecName: "config") pod "5fde0b34-cfe7-4ce2-9377-53e6ef0be697" (UID: "5fde0b34-cfe7-4ce2-9377-53e6ef0be697"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.905644 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fde0b34-cfe7-4ce2-9377-53e6ef0be697" (UID: "5fde0b34-cfe7-4ce2-9377-53e6ef0be697"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.908659 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5fde0b34-cfe7-4ce2-9377-53e6ef0be697" (UID: "5fde0b34-cfe7-4ce2-9377-53e6ef0be697"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.916689 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5fde0b34-cfe7-4ce2-9377-53e6ef0be697" (UID: "5fde0b34-cfe7-4ce2-9377-53e6ef0be697"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.928462 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fde0b34-cfe7-4ce2-9377-53e6ef0be697" (UID: "5fde0b34-cfe7-4ce2-9377-53e6ef0be697"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.952973 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.953009 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.953019 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.953030 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.953039 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24vdp\" (UniqueName: \"kubernetes.io/projected/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-kube-api-access-24vdp\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:57 crc kubenswrapper[4799]: I0127 08:08:57.953048 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fde0b34-cfe7-4ce2-9377-53e6ef0be697-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:58 crc kubenswrapper[4799]: I0127 08:08:58.130108 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-pvktj"] Jan 27 08:08:58 crc kubenswrapper[4799]: I0127 08:08:58.141283 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-pvktj"] Jan 27 08:08:58 crc kubenswrapper[4799]: I0127 08:08:58.469288 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" path="/var/lib/kubelet/pods/5fde0b34-cfe7-4ce2-9377-53e6ef0be697/volumes" Jan 27 08:08:58 crc kubenswrapper[4799]: I0127 08:08:58.821222 4799 generic.go:334] "Generic (PLEG): container finished" podID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerID="718ac6a54cd361b5a309e7f6dff7c144b9b872120e27139014c94af079e81c92" exitCode=0 Jan 27 08:08:58 crc kubenswrapper[4799]: I0127 08:08:58.821602 4799 generic.go:334] "Generic (PLEG): container finished" podID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerID="047ceb50f91c218a380edad162192dbdf640620894152032f327c52058ba614b" exitCode=0 Jan 27 08:08:58 crc kubenswrapper[4799]: I0127 08:08:58.821696 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7e958b88-23dd-4e18-bd39-497a621e39ba","Type":"ContainerDied","Data":"718ac6a54cd361b5a309e7f6dff7c144b9b872120e27139014c94af079e81c92"} Jan 27 08:08:58 crc kubenswrapper[4799]: I0127 08:08:58.821743 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7e958b88-23dd-4e18-bd39-497a621e39ba","Type":"ContainerDied","Data":"047ceb50f91c218a380edad162192dbdf640620894152032f327c52058ba614b"} Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.005382 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.073348 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data\") pod \"7e958b88-23dd-4e18-bd39-497a621e39ba\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.073408 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e958b88-23dd-4e18-bd39-497a621e39ba-etc-machine-id\") pod \"7e958b88-23dd-4e18-bd39-497a621e39ba\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.073464 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-scripts\") pod \"7e958b88-23dd-4e18-bd39-497a621e39ba\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.073607 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7dq8\" (UniqueName: \"kubernetes.io/projected/7e958b88-23dd-4e18-bd39-497a621e39ba-kube-api-access-r7dq8\") pod \"7e958b88-23dd-4e18-bd39-497a621e39ba\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.073650 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data-custom\") pod \"7e958b88-23dd-4e18-bd39-497a621e39ba\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.073706 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-combined-ca-bundle\") pod \"7e958b88-23dd-4e18-bd39-497a621e39ba\" (UID: \"7e958b88-23dd-4e18-bd39-497a621e39ba\") " Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.077403 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e958b88-23dd-4e18-bd39-497a621e39ba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7e958b88-23dd-4e18-bd39-497a621e39ba" (UID: "7e958b88-23dd-4e18-bd39-497a621e39ba"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.080176 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e958b88-23dd-4e18-bd39-497a621e39ba-kube-api-access-r7dq8" (OuterVolumeSpecName: "kube-api-access-r7dq8") pod "7e958b88-23dd-4e18-bd39-497a621e39ba" (UID: "7e958b88-23dd-4e18-bd39-497a621e39ba"). InnerVolumeSpecName "kube-api-access-r7dq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.082325 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7e958b88-23dd-4e18-bd39-497a621e39ba" (UID: "7e958b88-23dd-4e18-bd39-497a621e39ba"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.084158 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-scripts" (OuterVolumeSpecName: "scripts") pod "7e958b88-23dd-4e18-bd39-497a621e39ba" (UID: "7e958b88-23dd-4e18-bd39-497a621e39ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.119523 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e958b88-23dd-4e18-bd39-497a621e39ba" (UID: "7e958b88-23dd-4e18-bd39-497a621e39ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.175664 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7dq8\" (UniqueName: \"kubernetes.io/projected/7e958b88-23dd-4e18-bd39-497a621e39ba-kube-api-access-r7dq8\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.175705 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.175742 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.175754 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e958b88-23dd-4e18-bd39-497a621e39ba-etc-machine-id\") on node \"crc\" 
DevicePath \"\"" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.175767 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.178917 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data" (OuterVolumeSpecName: "config-data") pod "7e958b88-23dd-4e18-bd39-497a621e39ba" (UID: "7e958b88-23dd-4e18-bd39-497a621e39ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.277103 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e958b88-23dd-4e18-bd39-497a621e39ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.773910 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.833333 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7e958b88-23dd-4e18-bd39-497a621e39ba","Type":"ContainerDied","Data":"ce5ed8cbf1c981e8f69be408abb217475196048da86dde8c27a1e9e1c31caaab"} Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.833676 4799 scope.go:117] "RemoveContainer" containerID="718ac6a54cd361b5a309e7f6dff7c144b9b872120e27139014c94af079e81c92" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.833487 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.867039 4799 scope.go:117] "RemoveContainer" containerID="047ceb50f91c218a380edad162192dbdf640620894152032f327c52058ba614b" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.883828 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.901408 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.915405 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:59 crc kubenswrapper[4799]: E0127 08:08:59.915786 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="probe" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.915803 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="probe" Jan 27 08:08:59 crc kubenswrapper[4799]: E0127 08:08:59.915817 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="cinder-scheduler" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.915823 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="cinder-scheduler" Jan 27 08:08:59 crc kubenswrapper[4799]: E0127 08:08:59.915835 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerName="init" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.915844 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerName="init" Jan 27 08:08:59 crc kubenswrapper[4799]: E0127 08:08:59.915867 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerName="dnsmasq-dns" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.915873 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerName="dnsmasq-dns" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.916038 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fde0b34-cfe7-4ce2-9377-53e6ef0be697" containerName="dnsmasq-dns" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.916053 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="cinder-scheduler" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.916068 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" containerName="probe" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.916948 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.920289 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.923419 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.988454 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.988520 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.988567 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/182368c8-7aeb-4cfe-8de7-60794b59792c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.988582 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbg7\" (UniqueName: \"kubernetes.io/projected/182368c8-7aeb-4cfe-8de7-60794b59792c-kube-api-access-dpbg7\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.988626 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:08:59 crc kubenswrapper[4799]: I0127 08:08:59.988648 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.089595 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/182368c8-7aeb-4cfe-8de7-60794b59792c-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.089636 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbg7\" (UniqueName: \"kubernetes.io/projected/182368c8-7aeb-4cfe-8de7-60794b59792c-kube-api-access-dpbg7\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.089687 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.089711 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.089768 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.089803 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 
08:09:00.094251 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/182368c8-7aeb-4cfe-8de7-60794b59792c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.099382 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.100499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.101718 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.106643 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.110756 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbg7\" (UniqueName: 
\"kubernetes.io/projected/182368c8-7aeb-4cfe-8de7-60794b59792c-kube-api-access-dpbg7\") pod \"cinder-scheduler-0\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.244216 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.462099 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e958b88-23dd-4e18-bd39-497a621e39ba" path="/var/lib/kubelet/pods/7e958b88-23dd-4e18-bd39-497a621e39ba/volumes" Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.732474 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:09:00 crc kubenswrapper[4799]: I0127 08:09:00.855206 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"182368c8-7aeb-4cfe-8de7-60794b59792c","Type":"ContainerStarted","Data":"2e947bd71a4d3d6420047f74fd8fbdd510df04573e79e63ef63546e4851cfba1"} Jan 27 08:09:01 crc kubenswrapper[4799]: I0127 08:09:01.290171 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:09:01 crc kubenswrapper[4799]: I0127 08:09:01.364254 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5986c44cf4-xhtf8"] Jan 27 08:09:01 crc kubenswrapper[4799]: I0127 08:09:01.364499 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5986c44cf4-xhtf8" podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api-log" containerID="cri-o://9df4b0110e1f07ef202f1ce30cb3c791bd7cb5b94cfc83252634b17863fedc83" gracePeriod=30 Jan 27 08:09:01 crc kubenswrapper[4799]: I0127 08:09:01.364895 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5986c44cf4-xhtf8" 
podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api" containerID="cri-o://39276b2c3c01c8673aba78103bc42ac3840d0c04da54ce6044f126303eb51030" gracePeriod=30 Jan 27 08:09:01 crc kubenswrapper[4799]: I0127 08:09:01.866346 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"182368c8-7aeb-4cfe-8de7-60794b59792c","Type":"ContainerStarted","Data":"887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903"} Jan 27 08:09:01 crc kubenswrapper[4799]: I0127 08:09:01.869424 4799 generic.go:334] "Generic (PLEG): container finished" podID="54204b59-e201-4fe0-b86b-20a807415269" containerID="9df4b0110e1f07ef202f1ce30cb3c791bd7cb5b94cfc83252634b17863fedc83" exitCode=143 Jan 27 08:09:01 crc kubenswrapper[4799]: I0127 08:09:01.869462 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5986c44cf4-xhtf8" event={"ID":"54204b59-e201-4fe0-b86b-20a807415269","Type":"ContainerDied","Data":"9df4b0110e1f07ef202f1ce30cb3c791bd7cb5b94cfc83252634b17863fedc83"} Jan 27 08:09:02 crc kubenswrapper[4799]: I0127 08:09:02.885720 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"182368c8-7aeb-4cfe-8de7-60794b59792c","Type":"ContainerStarted","Data":"2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f"} Jan 27 08:09:02 crc kubenswrapper[4799]: I0127 08:09:02.916151 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.916112608 podStartE2EDuration="3.916112608s" podCreationTimestamp="2026-01-27 08:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:02.907527488 +0000 UTC m=+1409.218631553" watchObservedRunningTime="2026-01-27 08:09:02.916112608 +0000 UTC m=+1409.227216673" Jan 27 08:09:03 crc kubenswrapper[4799]: I0127 08:09:03.717116 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:09:03 crc kubenswrapper[4799]: I0127 08:09:03.722514 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:09:04 crc kubenswrapper[4799]: I0127 08:09:04.428245 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:09:04 crc kubenswrapper[4799]: I0127 08:09:04.833847 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:09:04 crc kubenswrapper[4799]: I0127 08:09:04.909120 4799 generic.go:334] "Generic (PLEG): container finished" podID="54204b59-e201-4fe0-b86b-20a807415269" containerID="39276b2c3c01c8673aba78103bc42ac3840d0c04da54ce6044f126303eb51030" exitCode=0 Jan 27 08:09:04 crc kubenswrapper[4799]: I0127 08:09:04.909185 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5986c44cf4-xhtf8" event={"ID":"54204b59-e201-4fe0-b86b-20a807415269","Type":"ContainerDied","Data":"39276b2c3c01c8673aba78103bc42ac3840d0c04da54ce6044f126303eb51030"} Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.114948 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.244487 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.301542 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data-custom\") pod \"54204b59-e201-4fe0-b86b-20a807415269\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.301682 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data\") pod \"54204b59-e201-4fe0-b86b-20a807415269\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.301726 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54204b59-e201-4fe0-b86b-20a807415269-logs\") pod \"54204b59-e201-4fe0-b86b-20a807415269\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.301779 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-combined-ca-bundle\") pod \"54204b59-e201-4fe0-b86b-20a807415269\" (UID: \"54204b59-e201-4fe0-b86b-20a807415269\") " Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.301802 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbkmz\" (UniqueName: \"kubernetes.io/projected/54204b59-e201-4fe0-b86b-20a807415269-kube-api-access-gbkmz\") pod \"54204b59-e201-4fe0-b86b-20a807415269\" (UID: 
\"54204b59-e201-4fe0-b86b-20a807415269\") " Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.302672 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54204b59-e201-4fe0-b86b-20a807415269-logs" (OuterVolumeSpecName: "logs") pod "54204b59-e201-4fe0-b86b-20a807415269" (UID: "54204b59-e201-4fe0-b86b-20a807415269"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.314141 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54204b59-e201-4fe0-b86b-20a807415269-kube-api-access-gbkmz" (OuterVolumeSpecName: "kube-api-access-gbkmz") pod "54204b59-e201-4fe0-b86b-20a807415269" (UID: "54204b59-e201-4fe0-b86b-20a807415269"). InnerVolumeSpecName "kube-api-access-gbkmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.324900 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54204b59-e201-4fe0-b86b-20a807415269" (UID: "54204b59-e201-4fe0-b86b-20a807415269"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.324996 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54204b59-e201-4fe0-b86b-20a807415269" (UID: "54204b59-e201-4fe0-b86b-20a807415269"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.356595 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data" (OuterVolumeSpecName: "config-data") pod "54204b59-e201-4fe0-b86b-20a807415269" (UID: "54204b59-e201-4fe0-b86b-20a807415269"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.404113 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.404146 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54204b59-e201-4fe0-b86b-20a807415269-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.404156 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.404166 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbkmz\" (UniqueName: \"kubernetes.io/projected/54204b59-e201-4fe0-b86b-20a807415269-kube-api-access-gbkmz\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.404175 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54204b59-e201-4fe0-b86b-20a807415269-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.923989 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5986c44cf4-xhtf8" 
event={"ID":"54204b59-e201-4fe0-b86b-20a807415269","Type":"ContainerDied","Data":"631c599c146c5a3cfba30ba7ef2b01d015922e2be50ab73f4d54d009b33ffc82"} Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.924593 4799 scope.go:117] "RemoveContainer" containerID="39276b2c3c01c8673aba78103bc42ac3840d0c04da54ce6044f126303eb51030" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.924296 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5986c44cf4-xhtf8" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.955591 4799 scope.go:117] "RemoveContainer" containerID="9df4b0110e1f07ef202f1ce30cb3c791bd7cb5b94cfc83252634b17863fedc83" Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.975072 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5986c44cf4-xhtf8"] Jan 27 08:09:05 crc kubenswrapper[4799]: I0127 08:09:05.983063 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5986c44cf4-xhtf8"] Jan 27 08:09:06 crc kubenswrapper[4799]: I0127 08:09:06.462183 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54204b59-e201-4fe0-b86b-20a807415269" path="/var/lib/kubelet/pods/54204b59-e201-4fe0-b86b-20a807415269/volumes" Jan 27 08:09:06 crc kubenswrapper[4799]: I0127 08:09:06.723143 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 08:09:08 crc kubenswrapper[4799]: I0127 08:09:08.006084 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5ff7b8d449-xjt48" Jan 27 08:09:08 crc kubenswrapper[4799]: I0127 08:09:08.088908 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6685f45956-srl2k"] Jan 27 08:09:08 crc kubenswrapper[4799]: I0127 08:09:08.089258 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6685f45956-srl2k" 
podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-api" containerID="cri-o://0fa1d70bab19c7e9facfc74062a516796a67474b586c1c2b568545daf1caa22c" gracePeriod=30 Jan 27 08:09:08 crc kubenswrapper[4799]: I0127 08:09:08.089462 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6685f45956-srl2k" podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-httpd" containerID="cri-o://de406fdc38c24232226c538331e728ce7a4f2f2db044e6fa2f7d9269f79f75d9" gracePeriod=30 Jan 27 08:09:08 crc kubenswrapper[4799]: I0127 08:09:08.960924 4799 generic.go:334] "Generic (PLEG): container finished" podID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerID="de406fdc38c24232226c538331e728ce7a4f2f2db044e6fa2f7d9269f79f75d9" exitCode=0 Jan 27 08:09:08 crc kubenswrapper[4799]: I0127 08:09:08.960976 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6685f45956-srl2k" event={"ID":"e295299c-8c47-48e4-a231-a1582bc0f3af","Type":"ContainerDied","Data":"de406fdc38c24232226c538331e728ce7a4f2f2db044e6fa2f7d9269f79f75d9"} Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.403695 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 08:09:09 crc kubenswrapper[4799]: E0127 08:09:09.404201 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.404225 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api" Jan 27 08:09:09 crc kubenswrapper[4799]: E0127 08:09:09.404259 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api-log" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.404587 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api-log" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.404895 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.404932 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="54204b59-e201-4fe0-b86b-20a807415269" containerName="barbican-api-log" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.405907 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.408385 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.408458 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kzlq8" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.409747 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.415254 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.578949 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzn94\" (UniqueName: \"kubernetes.io/projected/07e52675-0afa-4579-a5c1-f0aba31dd6e7-kube-api-access-qzn94\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.579066 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.579162 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.579281 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.680845 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.680911 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.681006 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.681067 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzn94\" (UniqueName: \"kubernetes.io/projected/07e52675-0afa-4579-a5c1-f0aba31dd6e7-kube-api-access-qzn94\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.682181 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.687107 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.697705 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config-secret\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.699991 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzn94\" (UniqueName: \"kubernetes.io/projected/07e52675-0afa-4579-a5c1-f0aba31dd6e7-kube-api-access-qzn94\") pod \"openstackclient\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.729192 
4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.794232 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-796576645f-ws7ff"] Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.796428 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.806048 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.806267 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.806580 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.809535 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-796576645f-ws7ff"] Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.910644 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-etc-swift\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.910998 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-config-data\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.911036 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-internal-tls-certs\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.911071 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-combined-ca-bundle\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.911105 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-log-httpd\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.911125 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdmq\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-kube-api-access-9kdmq\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc kubenswrapper[4799]: I0127 08:09:09.911153 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-run-httpd\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:09 crc 
kubenswrapper[4799]: I0127 08:09:09.911169 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-public-tls-certs\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.013368 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-log-httpd\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.013420 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdmq\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-kube-api-access-9kdmq\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.013463 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-run-httpd\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.013484 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-public-tls-certs\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 
08:09:10.013550 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-etc-swift\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.013583 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-config-data\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.013635 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-internal-tls-certs\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.013690 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-combined-ca-bundle\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.014932 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-log-httpd\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.015907 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-run-httpd\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.019391 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-public-tls-certs\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.021228 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-config-data\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.022679 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-combined-ca-bundle\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.025010 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-etc-swift\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.028130 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-internal-tls-certs\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.033994 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdmq\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-kube-api-access-9kdmq\") pod \"swift-proxy-796576645f-ws7ff\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.130343 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.277832 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.505184 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.717013 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-796576645f-ws7ff"] Jan 27 08:09:10 crc kubenswrapper[4799]: W0127 08:09:10.726389 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod786fd8aa_3ed9_420c_bdcd_b15a36795e72.slice/crio-224478b6a05caebbe95da6cd7ffeef1a6c164b12dfe12547665ff0091ad2f52c WatchSource:0}: Error finding container 224478b6a05caebbe95da6cd7ffeef1a6c164b12dfe12547665ff0091ad2f52c: Status 404 returned error can't find the container with id 224478b6a05caebbe95da6cd7ffeef1a6c164b12dfe12547665ff0091ad2f52c Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.993319 4799 generic.go:334] "Generic (PLEG): container finished" podID="e295299c-8c47-48e4-a231-a1582bc0f3af" 
containerID="0fa1d70bab19c7e9facfc74062a516796a67474b586c1c2b568545daf1caa22c" exitCode=0 Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.993337 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6685f45956-srl2k" event={"ID":"e295299c-8c47-48e4-a231-a1582bc0f3af","Type":"ContainerDied","Data":"0fa1d70bab19c7e9facfc74062a516796a67474b586c1c2b568545daf1caa22c"} Jan 27 08:09:10 crc kubenswrapper[4799]: I0127 08:09:10.996785 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"07e52675-0afa-4579-a5c1-f0aba31dd6e7","Type":"ContainerStarted","Data":"7fe91c2d0f3ad8930ef11934bc565716259a1df84aa799be751f3c4e60ff745b"} Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.002506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-796576645f-ws7ff" event={"ID":"786fd8aa-3ed9-420c-bdcd-b15a36795e72","Type":"ContainerStarted","Data":"224478b6a05caebbe95da6cd7ffeef1a6c164b12dfe12547665ff0091ad2f52c"} Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.539325 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.552229 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-httpd-config\") pod \"e295299c-8c47-48e4-a231-a1582bc0f3af\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.552279 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-ovndb-tls-certs\") pod \"e295299c-8c47-48e4-a231-a1582bc0f3af\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.552344 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-combined-ca-bundle\") pod \"e295299c-8c47-48e4-a231-a1582bc0f3af\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.552615 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk9x7\" (UniqueName: \"kubernetes.io/projected/e295299c-8c47-48e4-a231-a1582bc0f3af-kube-api-access-hk9x7\") pod \"e295299c-8c47-48e4-a231-a1582bc0f3af\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.552701 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-config\") pod \"e295299c-8c47-48e4-a231-a1582bc0f3af\" (UID: \"e295299c-8c47-48e4-a231-a1582bc0f3af\") " Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.573463 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e295299c-8c47-48e4-a231-a1582bc0f3af-kube-api-access-hk9x7" (OuterVolumeSpecName: "kube-api-access-hk9x7") pod "e295299c-8c47-48e4-a231-a1582bc0f3af" (UID: "e295299c-8c47-48e4-a231-a1582bc0f3af"). InnerVolumeSpecName "kube-api-access-hk9x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.585211 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e295299c-8c47-48e4-a231-a1582bc0f3af" (UID: "e295299c-8c47-48e4-a231-a1582bc0f3af"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.606032 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-config" (OuterVolumeSpecName: "config") pod "e295299c-8c47-48e4-a231-a1582bc0f3af" (UID: "e295299c-8c47-48e4-a231-a1582bc0f3af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.620130 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e295299c-8c47-48e4-a231-a1582bc0f3af" (UID: "e295299c-8c47-48e4-a231-a1582bc0f3af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.642202 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e295299c-8c47-48e4-a231-a1582bc0f3af" (UID: "e295299c-8c47-48e4-a231-a1582bc0f3af"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.654323 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk9x7\" (UniqueName: \"kubernetes.io/projected/e295299c-8c47-48e4-a231-a1582bc0f3af-kube-api-access-hk9x7\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.654557 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.654643 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.654721 4799 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:11 crc kubenswrapper[4799]: I0127 08:09:11.654794 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e295299c-8c47-48e4-a231-a1582bc0f3af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.026249 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-796576645f-ws7ff" event={"ID":"786fd8aa-3ed9-420c-bdcd-b15a36795e72","Type":"ContainerStarted","Data":"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db"} Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.026652 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-796576645f-ws7ff" 
event={"ID":"786fd8aa-3ed9-420c-bdcd-b15a36795e72","Type":"ContainerStarted","Data":"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2"} Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.026667 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.026680 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.030162 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6685f45956-srl2k" event={"ID":"e295299c-8c47-48e4-a231-a1582bc0f3af","Type":"ContainerDied","Data":"eee0ca3f7d4b1697fae6ae9e3aba5439c1de61b242b5b1d0b59d301a6e05392e"} Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.030395 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6685f45956-srl2k" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.035131 4799 scope.go:117] "RemoveContainer" containerID="de406fdc38c24232226c538331e728ce7a4f2f2db044e6fa2f7d9269f79f75d9" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.051111 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-796576645f-ws7ff" podStartSLOduration=3.051088903 podStartE2EDuration="3.051088903s" podCreationTimestamp="2026-01-27 08:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:12.043249732 +0000 UTC m=+1418.354353787" watchObservedRunningTime="2026-01-27 08:09:12.051088903 +0000 UTC m=+1418.362192968" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.075770 4799 scope.go:117] "RemoveContainer" containerID="0fa1d70bab19c7e9facfc74062a516796a67474b586c1c2b568545daf1caa22c" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.092502 4799 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6685f45956-srl2k"] Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.101251 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6685f45956-srl2k"] Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.463170 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" path="/var/lib/kubelet/pods/e295299c-8c47-48e4-a231-a1582bc0f3af/volumes" Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.749293 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.752167 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-central-agent" containerID="cri-o://d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5" gracePeriod=30 Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.752813 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="proxy-httpd" containerID="cri-o://a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a" gracePeriod=30 Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.752915 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-notification-agent" containerID="cri-o://bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd" gracePeriod=30 Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.752972 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="sg-core" 
containerID="cri-o://5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b" gracePeriod=30 Jan 27 08:09:12 crc kubenswrapper[4799]: I0127 08:09:12.763087 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": EOF" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.043821 4799 generic.go:334] "Generic (PLEG): container finished" podID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerID="a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a" exitCode=0 Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.043856 4799 generic.go:334] "Generic (PLEG): container finished" podID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerID="5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b" exitCode=2 Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.043905 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerDied","Data":"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a"} Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.043931 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerDied","Data":"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b"} Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.677740 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.694461 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-scripts\") pod \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.694548 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-run-httpd\") pod \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.694614 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-sg-core-conf-yaml\") pod \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.694666 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-log-httpd\") pod \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.694742 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdh74\" (UniqueName: \"kubernetes.io/projected/bb0e296f-2929-4aa2-8272-ccba5119c5d1-kube-api-access-jdh74\") pod \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.694777 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-combined-ca-bundle\") pod \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.695667 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb0e296f-2929-4aa2-8272-ccba5119c5d1" (UID: "bb0e296f-2929-4aa2-8272-ccba5119c5d1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.695906 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-config-data\") pod \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\" (UID: \"bb0e296f-2929-4aa2-8272-ccba5119c5d1\") " Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.696138 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb0e296f-2929-4aa2-8272-ccba5119c5d1" (UID: "bb0e296f-2929-4aa2-8272-ccba5119c5d1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.696895 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.696921 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0e296f-2929-4aa2-8272-ccba5119c5d1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.701527 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-scripts" (OuterVolumeSpecName: "scripts") pod "bb0e296f-2929-4aa2-8272-ccba5119c5d1" (UID: "bb0e296f-2929-4aa2-8272-ccba5119c5d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.703537 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0e296f-2929-4aa2-8272-ccba5119c5d1-kube-api-access-jdh74" (OuterVolumeSpecName: "kube-api-access-jdh74") pod "bb0e296f-2929-4aa2-8272-ccba5119c5d1" (UID: "bb0e296f-2929-4aa2-8272-ccba5119c5d1"). InnerVolumeSpecName "kube-api-access-jdh74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.739482 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb0e296f-2929-4aa2-8272-ccba5119c5d1" (UID: "bb0e296f-2929-4aa2-8272-ccba5119c5d1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.787614 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0e296f-2929-4aa2-8272-ccba5119c5d1" (UID: "bb0e296f-2929-4aa2-8272-ccba5119c5d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.799086 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.799132 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.799145 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.799159 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdh74\" (UniqueName: \"kubernetes.io/projected/bb0e296f-2929-4aa2-8272-ccba5119c5d1-kube-api-access-jdh74\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.827502 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-config-data" (OuterVolumeSpecName: "config-data") pod "bb0e296f-2929-4aa2-8272-ccba5119c5d1" (UID: "bb0e296f-2929-4aa2-8272-ccba5119c5d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:13 crc kubenswrapper[4799]: I0127 08:09:13.900642 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0e296f-2929-4aa2-8272-ccba5119c5d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.064371 4799 generic.go:334] "Generic (PLEG): container finished" podID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerID="bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd" exitCode=0 Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.064406 4799 generic.go:334] "Generic (PLEG): container finished" podID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerID="d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5" exitCode=0 Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.064428 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerDied","Data":"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd"} Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.064458 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerDied","Data":"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5"} Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.064471 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0e296f-2929-4aa2-8272-ccba5119c5d1","Type":"ContainerDied","Data":"8bdcc3ee3773b1282bd0d09dafd29b88728bb613d4313035f5cf7b9fbd79726c"} Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.064488 4799 scope.go:117] "RemoveContainer" containerID="a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.064635 4799 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.101505 4799 scope.go:117] "RemoveContainer" containerID="5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.123082 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.134569 4799 scope.go:117] "RemoveContainer" containerID="bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.146438 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.152401 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.152924 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-api" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.152948 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-api" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.152965 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-notification-agent" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.152975 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-notification-agent" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.152996 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-central-agent" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153007 4799 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-central-agent" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.153028 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="sg-core" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153036 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="sg-core" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.153045 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="proxy-httpd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153052 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="proxy-httpd" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.153067 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-httpd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153074 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-httpd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153293 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-central-agent" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153338 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="sg-core" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153350 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-api" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153358 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e295299c-8c47-48e4-a231-a1582bc0f3af" containerName="neutron-httpd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153375 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="proxy-httpd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.153390 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" containerName="ceilometer-notification-agent" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.155353 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.157514 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.157539 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.177282 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.185285 4799 scope.go:117] "RemoveContainer" containerID="d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.204787 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.204968 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-scripts\") pod \"ceilometer-0\" (UID: 
\"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.205003 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-config-data\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.205030 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-log-httpd\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.205175 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.205213 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-run-httpd\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.205258 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp9v5\" (UniqueName: \"kubernetes.io/projected/18be09ba-8035-4f5f-be90-d8892cf5f8ad-kube-api-access-kp9v5\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 
08:09:14.209131 4799 scope.go:117] "RemoveContainer" containerID="a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.210516 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a\": container with ID starting with a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a not found: ID does not exist" containerID="a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.210575 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a"} err="failed to get container status \"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a\": rpc error: code = NotFound desc = could not find container \"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a\": container with ID starting with a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.210613 4799 scope.go:117] "RemoveContainer" containerID="5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.211829 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b\": container with ID starting with 5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b not found: ID does not exist" containerID="5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.211859 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b"} err="failed to get container status \"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b\": rpc error: code = NotFound desc = could not find container \"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b\": container with ID starting with 5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.211876 4799 scope.go:117] "RemoveContainer" containerID="bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.212218 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd\": container with ID starting with bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd not found: ID does not exist" containerID="bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.212247 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd"} err="failed to get container status \"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd\": rpc error: code = NotFound desc = could not find container \"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd\": container with ID starting with bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.212266 4799 scope.go:117] "RemoveContainer" containerID="d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5" Jan 27 08:09:14 crc kubenswrapper[4799]: E0127 08:09:14.212700 4799 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5\": container with ID starting with d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5 not found: ID does not exist" containerID="d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.212721 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5"} err="failed to get container status \"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5\": rpc error: code = NotFound desc = could not find container \"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5\": container with ID starting with d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5 not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.212736 4799 scope.go:117] "RemoveContainer" containerID="a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.212947 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a"} err="failed to get container status \"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a\": rpc error: code = NotFound desc = could not find container \"a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a\": container with ID starting with a94e94816abc1214f1fbc4f4dc33ae8edcdbea0de33802dc3a02f58c48a15a7a not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.212981 4799 scope.go:117] "RemoveContainer" containerID="5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.213217 4799 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b"} err="failed to get container status \"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b\": rpc error: code = NotFound desc = could not find container \"5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b\": container with ID starting with 5f1f4d9b278d2eec2980ea28c2616ad243d63e73f554f6c36db050eeeaf0755b not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.213236 4799 scope.go:117] "RemoveContainer" containerID="bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.213493 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd"} err="failed to get container status \"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd\": rpc error: code = NotFound desc = could not find container \"bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd\": container with ID starting with bc5cfc2195df2652b912c7113441fe825feb0d54d5789c2a1d5b77087667f4cd not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.213519 4799 scope.go:117] "RemoveContainer" containerID="d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.213800 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5"} err="failed to get container status \"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5\": rpc error: code = NotFound desc = could not find container \"d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5\": container with ID starting with 
d5fcc17cd2e0493914de2652c1cfb3cff509e9cf606385252ab3c15566bd0ef5 not found: ID does not exist" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.306432 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-config-data\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.306489 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-log-httpd\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.306564 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.306586 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-run-httpd\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.307262 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp9v5\" (UniqueName: \"kubernetes.io/projected/18be09ba-8035-4f5f-be90-d8892cf5f8ad-kube-api-access-kp9v5\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.307315 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.307441 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-scripts\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.307142 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-run-httpd\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.307104 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-log-httpd\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.310778 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-scripts\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.310832 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-config-data\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.311733 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.325515 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.327484 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp9v5\" (UniqueName: \"kubernetes.io/projected/18be09ba-8035-4f5f-be90-d8892cf5f8ad-kube-api-access-kp9v5\") pod \"ceilometer-0\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") " pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.466995 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0e296f-2929-4aa2-8272-ccba5119c5d1" path="/var/lib/kubelet/pods/bb0e296f-2929-4aa2-8272-ccba5119c5d1/volumes" Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.484234 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:09:14 crc kubenswrapper[4799]: W0127 08:09:14.793252 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18be09ba_8035_4f5f_be90_d8892cf5f8ad.slice/crio-5d29a3a6d6fb0fb8c1c5e25585db0393abcaa08eedc1435aefd1107808dd21d4 WatchSource:0}: Error finding container 5d29a3a6d6fb0fb8c1c5e25585db0393abcaa08eedc1435aefd1107808dd21d4: Status 404 returned error can't find the container with id 5d29a3a6d6fb0fb8c1c5e25585db0393abcaa08eedc1435aefd1107808dd21d4 Jan 27 08:09:14 crc kubenswrapper[4799]: I0127 08:09:14.794929 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:15 crc kubenswrapper[4799]: I0127 08:09:15.075862 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerStarted","Data":"5d29a3a6d6fb0fb8c1c5e25585db0393abcaa08eedc1435aefd1107808dd21d4"} Jan 27 08:09:15 crc kubenswrapper[4799]: I0127 08:09:15.148986 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:20 crc kubenswrapper[4799]: I0127 08:09:20.135101 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:09:21 crc kubenswrapper[4799]: I0127 08:09:21.002197 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:22 crc kubenswrapper[4799]: I0127 08:09:22.140558 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerStarted","Data":"56a1e0e567cfa08f604a96bf778cee55e167511573fd5b5991c46fd59cb2b7b5"} Jan 27 08:09:22 crc kubenswrapper[4799]: I0127 08:09:22.141000 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerStarted","Data":"f8a9336fcb119530a72af14d27be0c664331fa385f674924c8674edd6b652643"} Jan 27 08:09:22 crc kubenswrapper[4799]: I0127 08:09:22.141872 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"07e52675-0afa-4579-a5c1-f0aba31dd6e7","Type":"ContainerStarted","Data":"932c19e1786489198ed2fd00256bc0ea9ef4db8f5bd057c7ca4751c183f4f1d6"} Jan 27 08:09:22 crc kubenswrapper[4799]: I0127 08:09:22.167514 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.505303383 podStartE2EDuration="13.167493308s" podCreationTimestamp="2026-01-27 08:09:09 +0000 UTC" firstStartedPulling="2026-01-27 08:09:10.291579791 +0000 UTC m=+1416.602683856" lastFinishedPulling="2026-01-27 08:09:20.953769716 +0000 UTC m=+1427.264873781" observedRunningTime="2026-01-27 08:09:22.161395935 +0000 UTC m=+1428.472500000" watchObservedRunningTime="2026-01-27 08:09:22.167493308 +0000 UTC m=+1428.478597373" Jan 27 08:09:23 crc kubenswrapper[4799]: I0127 08:09:23.165033 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerStarted","Data":"faa92dc8d7defa68a7ba62741f8477a72bf17eb5374f0b6676cf704214ec6024"} Jan 27 08:09:23 crc kubenswrapper[4799]: I0127 08:09:23.731803 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:09:23 crc kubenswrapper[4799]: I0127 08:09:23.731882 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:09:24 crc kubenswrapper[4799]: I0127 08:09:24.176416 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerStarted","Data":"e90dc1d5178ce7b73c738b3b59dcb5c782a1547c9c188cefac3aa0aaa559a86e"} Jan 27 08:09:24 crc kubenswrapper[4799]: I0127 08:09:24.176809 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 08:09:24 crc kubenswrapper[4799]: I0127 08:09:24.176650 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="sg-core" containerID="cri-o://faa92dc8d7defa68a7ba62741f8477a72bf17eb5374f0b6676cf704214ec6024" gracePeriod=30 Jan 27 08:09:24 crc kubenswrapper[4799]: I0127 08:09:24.176614 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-central-agent" containerID="cri-o://f8a9336fcb119530a72af14d27be0c664331fa385f674924c8674edd6b652643" gracePeriod=30 Jan 27 08:09:24 crc kubenswrapper[4799]: I0127 08:09:24.176687 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-notification-agent" containerID="cri-o://56a1e0e567cfa08f604a96bf778cee55e167511573fd5b5991c46fd59cb2b7b5" gracePeriod=30 Jan 27 08:09:24 crc kubenswrapper[4799]: I0127 08:09:24.176684 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="proxy-httpd" containerID="cri-o://e90dc1d5178ce7b73c738b3b59dcb5c782a1547c9c188cefac3aa0aaa559a86e" gracePeriod=30 Jan 27 08:09:24 
crc kubenswrapper[4799]: I0127 08:09:24.201383 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.058517723 podStartE2EDuration="10.201362858s" podCreationTimestamp="2026-01-27 08:09:14 +0000 UTC" firstStartedPulling="2026-01-27 08:09:14.797171093 +0000 UTC m=+1421.108275158" lastFinishedPulling="2026-01-27 08:09:23.940016228 +0000 UTC m=+1430.251120293" observedRunningTime="2026-01-27 08:09:24.19922222 +0000 UTC m=+1430.510326285" watchObservedRunningTime="2026-01-27 08:09:24.201362858 +0000 UTC m=+1430.512466923" Jan 27 08:09:25 crc kubenswrapper[4799]: I0127 08:09:25.191237 4799 generic.go:334] "Generic (PLEG): container finished" podID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerID="faa92dc8d7defa68a7ba62741f8477a72bf17eb5374f0b6676cf704214ec6024" exitCode=2 Jan 27 08:09:25 crc kubenswrapper[4799]: I0127 08:09:25.191469 4799 generic.go:334] "Generic (PLEG): container finished" podID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerID="56a1e0e567cfa08f604a96bf778cee55e167511573fd5b5991c46fd59cb2b7b5" exitCode=0 Jan 27 08:09:25 crc kubenswrapper[4799]: I0127 08:09:25.191501 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerDied","Data":"faa92dc8d7defa68a7ba62741f8477a72bf17eb5374f0b6676cf704214ec6024"} Jan 27 08:09:25 crc kubenswrapper[4799]: I0127 08:09:25.191528 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerDied","Data":"56a1e0e567cfa08f604a96bf778cee55e167511573fd5b5991c46fd59cb2b7b5"} Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.309394 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kdhxd"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.310800 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.320724 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kdhxd"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.411623 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9pj2r"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.412763 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.443094 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9pj2r"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.512603 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83464c1e-e470-4907-aece-b0aeea8a7ff2-operator-scripts\") pod \"nova-api-db-create-kdhxd\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.513786 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qbkj\" (UniqueName: \"kubernetes.io/projected/83464c1e-e470-4907-aece-b0aeea8a7ff2-kube-api-access-4qbkj\") pod \"nova-api-db-create-kdhxd\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.516462 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-774s2"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.517821 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.527907 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-774s2"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.540845 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-07af-account-create-update-jg2hs"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.541992 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.544573 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.550416 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-07af-account-create-update-jg2hs"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.615355 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qbkj\" (UniqueName: \"kubernetes.io/projected/83464c1e-e470-4907-aece-b0aeea8a7ff2-kube-api-access-4qbkj\") pod \"nova-api-db-create-kdhxd\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.615966 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzcj\" (UniqueName: \"kubernetes.io/projected/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-kube-api-access-lxzcj\") pod \"nova-cell0-db-create-9pj2r\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.616030 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-operator-scripts\") pod \"nova-cell0-db-create-9pj2r\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.616098 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83464c1e-e470-4907-aece-b0aeea8a7ff2-operator-scripts\") pod \"nova-api-db-create-kdhxd\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.616850 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83464c1e-e470-4907-aece-b0aeea8a7ff2-operator-scripts\") pod \"nova-api-db-create-kdhxd\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.633964 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qbkj\" (UniqueName: \"kubernetes.io/projected/83464c1e-e470-4907-aece-b0aeea8a7ff2-kube-api-access-4qbkj\") pod \"nova-api-db-create-kdhxd\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.636954 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.715765 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-grg95"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.716783 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.717178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxzcj\" (UniqueName: \"kubernetes.io/projected/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-kube-api-access-lxzcj\") pod \"nova-cell0-db-create-9pj2r\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.717239 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-operator-scripts\") pod \"nova-cell0-db-create-9pj2r\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.717273 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0445ff2-2b89-41e9-81f3-953e21253b19-operator-scripts\") pod \"nova-cell1-db-create-774s2\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.717346 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6kxd\" (UniqueName: \"kubernetes.io/projected/b181c307-a5d9-4821-8b81-9bf5539511e5-kube-api-access-b6kxd\") pod \"nova-api-07af-account-create-update-jg2hs\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.717399 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b181c307-a5d9-4821-8b81-9bf5539511e5-operator-scripts\") pod \"nova-api-07af-account-create-update-jg2hs\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.717434 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgxt5\" (UniqueName: \"kubernetes.io/projected/e0445ff2-2b89-41e9-81f3-953e21253b19-kube-api-access-sgxt5\") pod \"nova-cell1-db-create-774s2\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.718393 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-operator-scripts\") pod \"nova-cell0-db-create-9pj2r\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.718930 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.727777 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-grg95"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.736889 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxzcj\" (UniqueName: \"kubernetes.io/projected/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-kube-api-access-lxzcj\") pod \"nova-cell0-db-create-9pj2r\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.819785 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82bh4\" (UniqueName: 
\"kubernetes.io/projected/c8cfa24c-646d-435d-bd6f-30199969555c-kube-api-access-82bh4\") pod \"nova-cell0-0f1e-account-create-update-grg95\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.820064 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6kxd\" (UniqueName: \"kubernetes.io/projected/b181c307-a5d9-4821-8b81-9bf5539511e5-kube-api-access-b6kxd\") pod \"nova-api-07af-account-create-update-jg2hs\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.820104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfa24c-646d-435d-bd6f-30199969555c-operator-scripts\") pod \"nova-cell0-0f1e-account-create-update-grg95\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.820130 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181c307-a5d9-4821-8b81-9bf5539511e5-operator-scripts\") pod \"nova-api-07af-account-create-update-jg2hs\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.820163 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgxt5\" (UniqueName: \"kubernetes.io/projected/e0445ff2-2b89-41e9-81f3-953e21253b19-kube-api-access-sgxt5\") pod \"nova-cell1-db-create-774s2\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 
08:09:29.820229 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0445ff2-2b89-41e9-81f3-953e21253b19-operator-scripts\") pod \"nova-cell1-db-create-774s2\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.820964 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0445ff2-2b89-41e9-81f3-953e21253b19-operator-scripts\") pod \"nova-cell1-db-create-774s2\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.821908 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181c307-a5d9-4821-8b81-9bf5539511e5-operator-scripts\") pod \"nova-api-07af-account-create-update-jg2hs\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.852699 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6kxd\" (UniqueName: \"kubernetes.io/projected/b181c307-a5d9-4821-8b81-9bf5539511e5-kube-api-access-b6kxd\") pod \"nova-api-07af-account-create-update-jg2hs\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.853231 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgxt5\" (UniqueName: \"kubernetes.io/projected/e0445ff2-2b89-41e9-81f3-953e21253b19-kube-api-access-sgxt5\") pod \"nova-cell1-db-create-774s2\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 
08:09:29.860440 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.921668 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82bh4\" (UniqueName: \"kubernetes.io/projected/c8cfa24c-646d-435d-bd6f-30199969555c-kube-api-access-82bh4\") pod \"nova-cell0-0f1e-account-create-update-grg95\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.921754 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfa24c-646d-435d-bd6f-30199969555c-operator-scripts\") pod \"nova-cell0-0f1e-account-create-update-grg95\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.924459 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfa24c-646d-435d-bd6f-30199969555c-operator-scripts\") pod \"nova-cell0-0f1e-account-create-update-grg95\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.939099 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82bh4\" (UniqueName: \"kubernetes.io/projected/c8cfa24c-646d-435d-bd6f-30199969555c-kube-api-access-82bh4\") pod \"nova-cell0-0f1e-account-create-update-grg95\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.941378 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-b6fd-account-create-update-zjgk2"] Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.943052 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.947109 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 08:09:29 crc kubenswrapper[4799]: I0127 08:09:29.951766 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b6fd-account-create-update-zjgk2"] Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.023847 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n5cp\" (UniqueName: \"kubernetes.io/projected/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-kube-api-access-2n5cp\") pod \"nova-cell1-b6fd-account-create-update-zjgk2\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.024011 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-operator-scripts\") pod \"nova-cell1-b6fd-account-create-update-zjgk2\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.036172 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.045130 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.137450 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-operator-scripts\") pod \"nova-cell1-b6fd-account-create-update-zjgk2\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.137501 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.137512 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-operator-scripts\") pod \"nova-cell1-b6fd-account-create-update-zjgk2\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.137669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n5cp\" (UniqueName: \"kubernetes.io/projected/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-kube-api-access-2n5cp\") pod \"nova-cell1-b6fd-account-create-update-zjgk2\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.141962 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kdhxd"] Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.163113 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n5cp\" (UniqueName: \"kubernetes.io/projected/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-kube-api-access-2n5cp\") pod 
\"nova-cell1-b6fd-account-create-update-zjgk2\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.242342 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kdhxd" event={"ID":"83464c1e-e470-4907-aece-b0aeea8a7ff2","Type":"ContainerStarted","Data":"79be3d7e2e21c308ef0f36d00a9978f5f3af98c48b0d8e6ae05f95950965f567"} Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.266439 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.390293 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-07af-account-create-update-jg2hs"] Jan 27 08:09:30 crc kubenswrapper[4799]: W0127 08:09:30.408356 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb181c307_a5d9_4821_8b81_9bf5539511e5.slice/crio-5ba7e57ebd81df1b5da3088dc9bc562085d8cac1d4f038b0eab0406e851aaf28 WatchSource:0}: Error finding container 5ba7e57ebd81df1b5da3088dc9bc562085d8cac1d4f038b0eab0406e851aaf28: Status 404 returned error can't find the container with id 5ba7e57ebd81df1b5da3088dc9bc562085d8cac1d4f038b0eab0406e851aaf28 Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.534154 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-grg95"] Jan 27 08:09:30 crc kubenswrapper[4799]: W0127 08:09:30.556170 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8cfa24c_646d_435d_bd6f_30199969555c.slice/crio-d9a9c9cfcfa33b4d2004e202423bfc3d227e6c39c891126474da23f554ca3c52 WatchSource:0}: Error finding container d9a9c9cfcfa33b4d2004e202423bfc3d227e6c39c891126474da23f554ca3c52: Status 404 
returned error can't find the container with id d9a9c9cfcfa33b4d2004e202423bfc3d227e6c39c891126474da23f554ca3c52 Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.635260 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9pj2r"] Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.717751 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-774s2"] Jan 27 08:09:30 crc kubenswrapper[4799]: I0127 08:09:30.822838 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b6fd-account-create-update-zjgk2"] Jan 27 08:09:30 crc kubenswrapper[4799]: W0127 08:09:30.841916 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a00aa8f_6e63_4f84_8353_1ba24e84e64d.slice/crio-5ad867d4d13a7be168444c3687130dbc7de4477d69f9db8bb1b8a383b2aa1425 WatchSource:0}: Error finding container 5ad867d4d13a7be168444c3687130dbc7de4477d69f9db8bb1b8a383b2aa1425: Status 404 returned error can't find the container with id 5ad867d4d13a7be168444c3687130dbc7de4477d69f9db8bb1b8a383b2aa1425 Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.257063 4799 generic.go:334] "Generic (PLEG): container finished" podID="b181c307-a5d9-4821-8b81-9bf5539511e5" containerID="164609eb4caa85166487145780db42d3fb0581e57c8a8c66eac723a4f5bc2cf7" exitCode=0 Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.257133 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-07af-account-create-update-jg2hs" event={"ID":"b181c307-a5d9-4821-8b81-9bf5539511e5","Type":"ContainerDied","Data":"164609eb4caa85166487145780db42d3fb0581e57c8a8c66eac723a4f5bc2cf7"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.257160 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-07af-account-create-update-jg2hs" 
event={"ID":"b181c307-a5d9-4821-8b81-9bf5539511e5","Type":"ContainerStarted","Data":"5ba7e57ebd81df1b5da3088dc9bc562085d8cac1d4f038b0eab0406e851aaf28"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.262224 4799 generic.go:334] "Generic (PLEG): container finished" podID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerID="f8a9336fcb119530a72af14d27be0c664331fa385f674924c8674edd6b652643" exitCode=0 Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.262283 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerDied","Data":"f8a9336fcb119530a72af14d27be0c664331fa385f674924c8674edd6b652643"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.265259 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-774s2" event={"ID":"e0445ff2-2b89-41e9-81f3-953e21253b19","Type":"ContainerStarted","Data":"d248ad2bdeac65e2733206852796a178a421bfc4311d4c8a1d5cac10230c0f50"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.265317 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-774s2" event={"ID":"e0445ff2-2b89-41e9-81f3-953e21253b19","Type":"ContainerStarted","Data":"432ac9f510f0a13715ac5fa595187128754b885ac1760ab6dc2b515ce99609e6"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.269452 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" event={"ID":"c8cfa24c-646d-435d-bd6f-30199969555c","Type":"ContainerStarted","Data":"012c5492e476216d5b48eea413a8c072a2d360afd90cae7702e94faae4a0cecf"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.269490 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" event={"ID":"c8cfa24c-646d-435d-bd6f-30199969555c","Type":"ContainerStarted","Data":"d9a9c9cfcfa33b4d2004e202423bfc3d227e6c39c891126474da23f554ca3c52"} Jan 27 
08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.274325 4799 generic.go:334] "Generic (PLEG): container finished" podID="83464c1e-e470-4907-aece-b0aeea8a7ff2" containerID="fbf81c7d67c613cc7e18d02405e57bc12861568985ce1ef5ceba2ad9fbb16599" exitCode=0 Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.274402 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kdhxd" event={"ID":"83464c1e-e470-4907-aece-b0aeea8a7ff2","Type":"ContainerDied","Data":"fbf81c7d67c613cc7e18d02405e57bc12861568985ce1ef5ceba2ad9fbb16599"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.286406 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" event={"ID":"6a00aa8f-6e63-4f84-8353-1ba24e84e64d","Type":"ContainerStarted","Data":"1a41cdc3ee1368151b407e42de5af928b3e4b5851478b9ecb8d6a356357cac0e"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.286723 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" event={"ID":"6a00aa8f-6e63-4f84-8353-1ba24e84e64d","Type":"ContainerStarted","Data":"5ad867d4d13a7be168444c3687130dbc7de4477d69f9db8bb1b8a383b2aa1425"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.288898 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9pj2r" event={"ID":"2bd94d42-7c61-4f0a-a655-d4f85cd03d88","Type":"ContainerStarted","Data":"db4e67252efaac0bd08269a4ff81fe59a762095f54b31c4946a50bbae415c5ba"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.288930 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9pj2r" event={"ID":"2bd94d42-7c61-4f0a-a655-d4f85cd03d88","Type":"ContainerStarted","Data":"2a291f0bf93a26a80db175c23ba0e031161070e8a1dc9a045fd22d9595c64a29"} Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.305050 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-0f1e-account-create-update-grg95" podStartSLOduration=2.305031771 podStartE2EDuration="2.305031771s" podCreationTimestamp="2026-01-27 08:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:31.294222961 +0000 UTC m=+1437.605327026" watchObservedRunningTime="2026-01-27 08:09:31.305031771 +0000 UTC m=+1437.616135856" Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.319058 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-774s2" podStartSLOduration=2.319033376 podStartE2EDuration="2.319033376s" podCreationTimestamp="2026-01-27 08:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:31.306889271 +0000 UTC m=+1437.617993336" watchObservedRunningTime="2026-01-27 08:09:31.319033376 +0000 UTC m=+1437.630137441" Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.329831 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" podStartSLOduration=2.329807186 podStartE2EDuration="2.329807186s" podCreationTimestamp="2026-01-27 08:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:31.321912564 +0000 UTC m=+1437.633016629" watchObservedRunningTime="2026-01-27 08:09:31.329807186 +0000 UTC m=+1437.640911251" Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.353338 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-9pj2r" podStartSLOduration=2.353300696 podStartE2EDuration="2.353300696s" podCreationTimestamp="2026-01-27 08:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-27 08:09:31.351371084 +0000 UTC m=+1437.662475149" watchObservedRunningTime="2026-01-27 08:09:31.353300696 +0000 UTC m=+1437.664404761" Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.717053 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.717359 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerName="glance-log" containerID="cri-o://42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866" gracePeriod=30 Jan 27 08:09:31 crc kubenswrapper[4799]: I0127 08:09:31.720437 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerName="glance-httpd" containerID="cri-o://54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2" gracePeriod=30 Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.300790 4799 generic.go:334] "Generic (PLEG): container finished" podID="6a00aa8f-6e63-4f84-8353-1ba24e84e64d" containerID="1a41cdc3ee1368151b407e42de5af928b3e4b5851478b9ecb8d6a356357cac0e" exitCode=0 Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.300880 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" event={"ID":"6a00aa8f-6e63-4f84-8353-1ba24e84e64d","Type":"ContainerDied","Data":"1a41cdc3ee1368151b407e42de5af928b3e4b5851478b9ecb8d6a356357cac0e"} Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.304418 4799 generic.go:334] "Generic (PLEG): container finished" podID="2bd94d42-7c61-4f0a-a655-d4f85cd03d88" containerID="db4e67252efaac0bd08269a4ff81fe59a762095f54b31c4946a50bbae415c5ba" exitCode=0 Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.304527 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-db-create-9pj2r" event={"ID":"2bd94d42-7c61-4f0a-a655-d4f85cd03d88","Type":"ContainerDied","Data":"db4e67252efaac0bd08269a4ff81fe59a762095f54b31c4946a50bbae415c5ba"} Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.306784 4799 generic.go:334] "Generic (PLEG): container finished" podID="e0445ff2-2b89-41e9-81f3-953e21253b19" containerID="d248ad2bdeac65e2733206852796a178a421bfc4311d4c8a1d5cac10230c0f50" exitCode=0 Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.306855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-774s2" event={"ID":"e0445ff2-2b89-41e9-81f3-953e21253b19","Type":"ContainerDied","Data":"d248ad2bdeac65e2733206852796a178a421bfc4311d4c8a1d5cac10230c0f50"} Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.308846 4799 generic.go:334] "Generic (PLEG): container finished" podID="c8cfa24c-646d-435d-bd6f-30199969555c" containerID="012c5492e476216d5b48eea413a8c072a2d360afd90cae7702e94faae4a0cecf" exitCode=0 Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.308893 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" event={"ID":"c8cfa24c-646d-435d-bd6f-30199969555c","Type":"ContainerDied","Data":"012c5492e476216d5b48eea413a8c072a2d360afd90cae7702e94faae4a0cecf"} Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.311823 4799 generic.go:334] "Generic (PLEG): container finished" podID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerID="42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866" exitCode=143 Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.311898 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55","Type":"ContainerDied","Data":"42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866"} Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.808329 4799 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.814176 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.999018 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181c307-a5d9-4821-8b81-9bf5539511e5-operator-scripts\") pod \"b181c307-a5d9-4821-8b81-9bf5539511e5\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.999126 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6kxd\" (UniqueName: \"kubernetes.io/projected/b181c307-a5d9-4821-8b81-9bf5539511e5-kube-api-access-b6kxd\") pod \"b181c307-a5d9-4821-8b81-9bf5539511e5\" (UID: \"b181c307-a5d9-4821-8b81-9bf5539511e5\") " Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.999167 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83464c1e-e470-4907-aece-b0aeea8a7ff2-operator-scripts\") pod \"83464c1e-e470-4907-aece-b0aeea8a7ff2\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.999298 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qbkj\" (UniqueName: \"kubernetes.io/projected/83464c1e-e470-4907-aece-b0aeea8a7ff2-kube-api-access-4qbkj\") pod \"83464c1e-e470-4907-aece-b0aeea8a7ff2\" (UID: \"83464c1e-e470-4907-aece-b0aeea8a7ff2\") " Jan 27 08:09:32 crc kubenswrapper[4799]: I0127 08:09:32.999942 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83464c1e-e470-4907-aece-b0aeea8a7ff2-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "83464c1e-e470-4907-aece-b0aeea8a7ff2" (UID: "83464c1e-e470-4907-aece-b0aeea8a7ff2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:32.999999 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b181c307-a5d9-4821-8b81-9bf5539511e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b181c307-a5d9-4821-8b81-9bf5539511e5" (UID: "b181c307-a5d9-4821-8b81-9bf5539511e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.004680 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83464c1e-e470-4907-aece-b0aeea8a7ff2-kube-api-access-4qbkj" (OuterVolumeSpecName: "kube-api-access-4qbkj") pod "83464c1e-e470-4907-aece-b0aeea8a7ff2" (UID: "83464c1e-e470-4907-aece-b0aeea8a7ff2"). InnerVolumeSpecName "kube-api-access-4qbkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.006252 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b181c307-a5d9-4821-8b81-9bf5539511e5-kube-api-access-b6kxd" (OuterVolumeSpecName: "kube-api-access-b6kxd") pod "b181c307-a5d9-4821-8b81-9bf5539511e5" (UID: "b181c307-a5d9-4821-8b81-9bf5539511e5"). InnerVolumeSpecName "kube-api-access-b6kxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.100778 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83464c1e-e470-4907-aece-b0aeea8a7ff2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.101664 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qbkj\" (UniqueName: \"kubernetes.io/projected/83464c1e-e470-4907-aece-b0aeea8a7ff2-kube-api-access-4qbkj\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.101688 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181c307-a5d9-4821-8b81-9bf5539511e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.101698 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6kxd\" (UniqueName: \"kubernetes.io/projected/b181c307-a5d9-4821-8b81-9bf5539511e5-kube-api-access-b6kxd\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.322420 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-07af-account-create-update-jg2hs" event={"ID":"b181c307-a5d9-4821-8b81-9bf5539511e5","Type":"ContainerDied","Data":"5ba7e57ebd81df1b5da3088dc9bc562085d8cac1d4f038b0eab0406e851aaf28"} Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.322462 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba7e57ebd81df1b5da3088dc9bc562085d8cac1d4f038b0eab0406e851aaf28" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.322426 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-07af-account-create-update-jg2hs" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.323979 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kdhxd" event={"ID":"83464c1e-e470-4907-aece-b0aeea8a7ff2","Type":"ContainerDied","Data":"79be3d7e2e21c308ef0f36d00a9978f5f3af98c48b0d8e6ae05f95950965f567"} Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.324020 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79be3d7e2e21c308ef0f36d00a9978f5f3af98c48b0d8e6ae05f95950965f567" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.324117 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kdhxd" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.644414 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.811978 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n5cp\" (UniqueName: \"kubernetes.io/projected/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-kube-api-access-2n5cp\") pod \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.812438 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-operator-scripts\") pod \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\" (UID: \"6a00aa8f-6e63-4f84-8353-1ba24e84e64d\") " Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.813180 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"6a00aa8f-6e63-4f84-8353-1ba24e84e64d" (UID: "6a00aa8f-6e63-4f84-8353-1ba24e84e64d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.816472 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-kube-api-access-2n5cp" (OuterVolumeSpecName: "kube-api-access-2n5cp") pod "6a00aa8f-6e63-4f84-8353-1ba24e84e64d" (UID: "6a00aa8f-6e63-4f84-8353-1ba24e84e64d"). InnerVolumeSpecName "kube-api-access-2n5cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.915122 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n5cp\" (UniqueName: \"kubernetes.io/projected/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-kube-api-access-2n5cp\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.915158 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a00aa8f-6e63-4f84-8353-1ba24e84e64d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.949543 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.959631 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:33 crc kubenswrapper[4799]: I0127 08:09:33.971463 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117320 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-operator-scripts\") pod \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117406 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0445ff2-2b89-41e9-81f3-953e21253b19-operator-scripts\") pod \"e0445ff2-2b89-41e9-81f3-953e21253b19\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117444 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxzcj\" (UniqueName: \"kubernetes.io/projected/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-kube-api-access-lxzcj\") pod \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\" (UID: \"2bd94d42-7c61-4f0a-a655-d4f85cd03d88\") " Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117470 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgxt5\" (UniqueName: \"kubernetes.io/projected/e0445ff2-2b89-41e9-81f3-953e21253b19-kube-api-access-sgxt5\") pod \"e0445ff2-2b89-41e9-81f3-953e21253b19\" (UID: \"e0445ff2-2b89-41e9-81f3-953e21253b19\") " Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117499 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82bh4\" (UniqueName: \"kubernetes.io/projected/c8cfa24c-646d-435d-bd6f-30199969555c-kube-api-access-82bh4\") pod \"c8cfa24c-646d-435d-bd6f-30199969555c\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117541 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfa24c-646d-435d-bd6f-30199969555c-operator-scripts\") pod \"c8cfa24c-646d-435d-bd6f-30199969555c\" (UID: \"c8cfa24c-646d-435d-bd6f-30199969555c\") " Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117865 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bd94d42-7c61-4f0a-a655-d4f85cd03d88" (UID: "2bd94d42-7c61-4f0a-a655-d4f85cd03d88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.117942 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0445ff2-2b89-41e9-81f3-953e21253b19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0445ff2-2b89-41e9-81f3-953e21253b19" (UID: "e0445ff2-2b89-41e9-81f3-953e21253b19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.118167 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfa24c-646d-435d-bd6f-30199969555c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8cfa24c-646d-435d-bd6f-30199969555c" (UID: "c8cfa24c-646d-435d-bd6f-30199969555c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.118276 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.118332 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0445ff2-2b89-41e9-81f3-953e21253b19-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.121597 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-kube-api-access-lxzcj" (OuterVolumeSpecName: "kube-api-access-lxzcj") pod "2bd94d42-7c61-4f0a-a655-d4f85cd03d88" (UID: "2bd94d42-7c61-4f0a-a655-d4f85cd03d88"). InnerVolumeSpecName "kube-api-access-lxzcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.127014 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0445ff2-2b89-41e9-81f3-953e21253b19-kube-api-access-sgxt5" (OuterVolumeSpecName: "kube-api-access-sgxt5") pod "e0445ff2-2b89-41e9-81f3-953e21253b19" (UID: "e0445ff2-2b89-41e9-81f3-953e21253b19"). InnerVolumeSpecName "kube-api-access-sgxt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.128423 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cfa24c-646d-435d-bd6f-30199969555c-kube-api-access-82bh4" (OuterVolumeSpecName: "kube-api-access-82bh4") pod "c8cfa24c-646d-435d-bd6f-30199969555c" (UID: "c8cfa24c-646d-435d-bd6f-30199969555c"). InnerVolumeSpecName "kube-api-access-82bh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.219586 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxzcj\" (UniqueName: \"kubernetes.io/projected/2bd94d42-7c61-4f0a-a655-d4f85cd03d88-kube-api-access-lxzcj\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.219661 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgxt5\" (UniqueName: \"kubernetes.io/projected/e0445ff2-2b89-41e9-81f3-953e21253b19-kube-api-access-sgxt5\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.219671 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82bh4\" (UniqueName: \"kubernetes.io/projected/c8cfa24c-646d-435d-bd6f-30199969555c-kube-api-access-82bh4\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.219683 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfa24c-646d-435d-bd6f-30199969555c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.335341 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.335682 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b6fd-account-create-update-zjgk2" event={"ID":"6a00aa8f-6e63-4f84-8353-1ba24e84e64d","Type":"ContainerDied","Data":"5ad867d4d13a7be168444c3687130dbc7de4477d69f9db8bb1b8a383b2aa1425"} Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.335757 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ad867d4d13a7be168444c3687130dbc7de4477d69f9db8bb1b8a383b2aa1425" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.337385 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9pj2r" event={"ID":"2bd94d42-7c61-4f0a-a655-d4f85cd03d88","Type":"ContainerDied","Data":"2a291f0bf93a26a80db175c23ba0e031161070e8a1dc9a045fd22d9595c64a29"} Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.337419 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a291f0bf93a26a80db175c23ba0e031161070e8a1dc9a045fd22d9595c64a29" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.337432 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9pj2r" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.339096 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-774s2" event={"ID":"e0445ff2-2b89-41e9-81f3-953e21253b19","Type":"ContainerDied","Data":"432ac9f510f0a13715ac5fa595187128754b885ac1760ab6dc2b515ce99609e6"} Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.339119 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-774s2" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.339126 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432ac9f510f0a13715ac5fa595187128754b885ac1760ab6dc2b515ce99609e6" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.341016 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" event={"ID":"c8cfa24c-646d-435d-bd6f-30199969555c","Type":"ContainerDied","Data":"d9a9c9cfcfa33b4d2004e202423bfc3d227e6c39c891126474da23f554ca3c52"} Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.341059 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a9c9cfcfa33b4d2004e202423bfc3d227e6c39c891126474da23f554ca3c52" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.341104 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-grg95" Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.508574 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.509168 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-log" containerID="cri-o://f0f1a27d1c3775d4f9bb3826cb3def99570c42e6ed6f54bd1e8c144f71e8c3ee" gracePeriod=30 Jan 27 08:09:34 crc kubenswrapper[4799]: I0127 08:09:34.509636 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-httpd" containerID="cri-o://58103e21c893ba0c7f7e115f0cb776fe7e3182e09f7ad2ca9104804b9087f777" gracePeriod=30 Jan 27 08:09:35 crc kubenswrapper[4799]: I0127 08:09:35.352150 4799 
generic.go:334] "Generic (PLEG): container finished" podID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerID="f0f1a27d1c3775d4f9bb3826cb3def99570c42e6ed6f54bd1e8c144f71e8c3ee" exitCode=143 Jan 27 08:09:35 crc kubenswrapper[4799]: I0127 08:09:35.352210 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11218246-f6a8-477a-9b9a-7abf0338df9e","Type":"ContainerDied","Data":"f0f1a27d1c3775d4f9bb3826cb3def99570c42e6ed6f54bd1e8c144f71e8c3ee"} Jan 27 08:09:35 crc kubenswrapper[4799]: I0127 08:09:35.895111 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054212 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxx7b\" (UniqueName: \"kubernetes.io/projected/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-kube-api-access-jxx7b\") pod \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054325 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-public-tls-certs\") pod \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054367 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-scripts\") pod \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054414 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-logs\") pod 
\"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054438 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054461 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-config-data\") pod \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054551 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-httpd-run\") pod \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.054568 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-combined-ca-bundle\") pod \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\" (UID: \"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55\") " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.055167 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.055607 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-logs" (OuterVolumeSpecName: "logs") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.061406 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-scripts" (OuterVolumeSpecName: "scripts") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.061662 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-kube-api-access-jxx7b" (OuterVolumeSpecName: "kube-api-access-jxx7b") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "kube-api-access-jxx7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.061812 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.090896 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.117712 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.125486 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-config-data" (OuterVolumeSpecName: "config-data") pod "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" (UID: "a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157050 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157080 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157090 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157099 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157108 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxx7b\" (UniqueName: \"kubernetes.io/projected/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-kube-api-access-jxx7b\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157135 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157143 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.157153 4799 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.178378 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.258946 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.363200 4799 generic.go:334] "Generic (PLEG): container finished" podID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerID="54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2" exitCode=0 Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.363248 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55","Type":"ContainerDied","Data":"54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2"} Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.363275 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55","Type":"ContainerDied","Data":"3526e57ba100926dddd6c03cfa6436d8966cdcaccecd679f269948f9af4db9b9"} Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.363323 4799 scope.go:117] "RemoveContainer" containerID="54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.363455 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.393254 4799 scope.go:117] "RemoveContainer" containerID="42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.403960 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.413616 4799 scope.go:117] "RemoveContainer" containerID="54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.414053 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2\": container with ID starting with 54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2 not found: ID does not exist" containerID="54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.414096 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2"} err="failed to get container status \"54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2\": rpc error: code = NotFound desc = could not find container \"54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2\": container with ID starting with 54ef10f706939c87bfaffb4c06838550699f2c3a45c42880cb9e3be9fc0383d2 not found: ID does not exist" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.414123 4799 scope.go:117] "RemoveContainer" containerID="42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.414404 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866\": container with ID starting with 42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866 not found: ID does not exist" containerID="42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.414426 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866"} err="failed to get container status \"42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866\": rpc error: code = NotFound desc = could not find container \"42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866\": container with ID starting with 42cf2852ef566080aba6b920ff65efbe0457b3c568931d5800463ccba29c7866 not found: ID does not exist" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.418020 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.437155 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.437907 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b181c307-a5d9-4821-8b81-9bf5539511e5" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.438004 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b181c307-a5d9-4821-8b81-9bf5539511e5" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.438109 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cfa24c-646d-435d-bd6f-30199969555c" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.438189 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c8cfa24c-646d-435d-bd6f-30199969555c" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.438261 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0445ff2-2b89-41e9-81f3-953e21253b19" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.438343 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0445ff2-2b89-41e9-81f3-953e21253b19" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.438441 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd94d42-7c61-4f0a-a655-d4f85cd03d88" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.438510 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd94d42-7c61-4f0a-a655-d4f85cd03d88" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.438591 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a00aa8f-6e63-4f84-8353-1ba24e84e64d" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.438669 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a00aa8f-6e63-4f84-8353-1ba24e84e64d" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.438742 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerName="glance-httpd" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.438804 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerName="glance-httpd" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.438881 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerName="glance-log" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.438946 
4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerName="glance-log" Jan 27 08:09:36 crc kubenswrapper[4799]: E0127 08:09:36.439016 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83464c1e-e470-4907-aece-b0aeea8a7ff2" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439081 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="83464c1e-e470-4907-aece-b0aeea8a7ff2" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439456 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="83464c1e-e470-4907-aece-b0aeea8a7ff2" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439537 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cfa24c-646d-435d-bd6f-30199969555c" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439621 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" containerName="glance-log" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439697 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b181c307-a5d9-4821-8b81-9bf5539511e5" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439777 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd94d42-7c61-4f0a-a655-d4f85cd03d88" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439855 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a00aa8f-6e63-4f84-8353-1ba24e84e64d" containerName="mariadb-account-create-update" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439932 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" 
containerName="glance-httpd" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.439998 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0445ff2-2b89-41e9-81f3-953e21253b19" containerName="mariadb-database-create" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.441261 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.444012 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.444314 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.446458 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.484438 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55" path="/var/lib/kubelet/pods/a72dcfc8-bda0-475d-ab6f-3c8a3ba8da55/volumes" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566469 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566528 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " 
pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566639 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-logs\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566666 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsvk\" (UniqueName: \"kubernetes.io/projected/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-kube-api-access-5fsvk\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566692 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-scripts\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566822 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566850 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-config-data\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " 
pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.566903 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.668698 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-logs\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.668740 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fsvk\" (UniqueName: \"kubernetes.io/projected/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-kube-api-access-5fsvk\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.668763 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-scripts\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.668850 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc 
kubenswrapper[4799]: I0127 08:09:36.668872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-config-data\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.668903 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.668926 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.668944 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.669468 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.670053 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.670151 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-logs\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.675491 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-scripts\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.675919 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.676160 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-config-data\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.676395 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.691237 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fsvk\" (UniqueName: \"kubernetes.io/projected/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-kube-api-access-5fsvk\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.718966 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " pod="openstack/glance-default-external-api-0" Jan 27 08:09:36 crc kubenswrapper[4799]: I0127 08:09:36.764461 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:09:37 crc kubenswrapper[4799]: I0127 08:09:37.444391 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.412639 4799 generic.go:334] "Generic (PLEG): container finished" podID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerID="58103e21c893ba0c7f7e115f0cb776fe7e3182e09f7ad2ca9104804b9087f777" exitCode=0 Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.412938 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11218246-f6a8-477a-9b9a-7abf0338df9e","Type":"ContainerDied","Data":"58103e21c893ba0c7f7e115f0cb776fe7e3182e09f7ad2ca9104804b9087f777"} Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.413080 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11218246-f6a8-477a-9b9a-7abf0338df9e","Type":"ContainerDied","Data":"e8faad5e363e437f9979d58e910514d2a8c32a0a5ab6a39fd753111896f99dbe"} Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.413102 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8faad5e363e437f9979d58e910514d2a8c32a0a5ab6a39fd753111896f99dbe" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.414409 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.422743 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fbb51a95-a5db-4e7c-8cca-a59d07200ad5","Type":"ContainerStarted","Data":"47f6d93a69dd90911aea2e658078c1ee7cf68c9157a2c5886005700a1107b370"} Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.422795 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fbb51a95-a5db-4e7c-8cca-a59d07200ad5","Type":"ContainerStarted","Data":"d26a7759bc073d43ed7139c4eabc6e26943a22df69a9ca1460423b7cd817ad0a"} Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.512698 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-config-data\") pod \"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.512760 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzczc\" (UniqueName: \"kubernetes.io/projected/11218246-f6a8-477a-9b9a-7abf0338df9e-kube-api-access-nzczc\") pod \"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.512790 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-scripts\") pod \"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.513434 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-logs\") pod 
\"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.513543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.513579 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-internal-tls-certs\") pod \"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.513616 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-httpd-run\") pod \"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.513659 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-combined-ca-bundle\") pod \"11218246-f6a8-477a-9b9a-7abf0338df9e\" (UID: \"11218246-f6a8-477a-9b9a-7abf0338df9e\") " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.516391 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.516766 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-logs" (OuterVolumeSpecName: "logs") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.522438 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.527882 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11218246-f6a8-477a-9b9a-7abf0338df9e-kube-api-access-nzczc" (OuterVolumeSpecName: "kube-api-access-nzczc") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "kube-api-access-nzczc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.530519 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-scripts" (OuterVolumeSpecName: "scripts") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.567226 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.567252 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-config-data" (OuterVolumeSpecName: "config-data") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.588273 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "11218246-f6a8-477a-9b9a-7abf0338df9e" (UID: "11218246-f6a8-477a-9b9a-7abf0338df9e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615553 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615596 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615607 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615617 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11218246-f6a8-477a-9b9a-7abf0338df9e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615627 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615635 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615643 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzczc\" (UniqueName: \"kubernetes.io/projected/11218246-f6a8-477a-9b9a-7abf0338df9e-kube-api-access-nzczc\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.615656 4799 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11218246-f6a8-477a-9b9a-7abf0338df9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.633644 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 08:09:38 crc kubenswrapper[4799]: I0127 08:09:38.717062 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.432935 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fbb51a95-a5db-4e7c-8cca-a59d07200ad5","Type":"ContainerStarted","Data":"8cd4ca5237c50f4d23bf8d52a5873e5a8a629a0e179bd22093979620a71f464d"} Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.432974 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.457113 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.457087502 podStartE2EDuration="3.457087502s" podCreationTimestamp="2026-01-27 08:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:39.452443198 +0000 UTC m=+1445.763547273" watchObservedRunningTime="2026-01-27 08:09:39.457087502 +0000 UTC m=+1445.768191567" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.491865 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.511158 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.525967 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:09:39 crc kubenswrapper[4799]: E0127 08:09:39.526425 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-log" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.526440 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-log" Jan 27 08:09:39 crc kubenswrapper[4799]: E0127 08:09:39.526472 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-httpd" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.526480 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-httpd" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.526699 4799 
memory_manager.go:354] "RemoveStaleState removing state" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-httpd" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.526715 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" containerName="glance-log" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.528053 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.532796 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.533157 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.539594 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635012 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635101 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635123 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635170 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635403 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635493 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635532 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82jrg\" (UniqueName: \"kubernetes.io/projected/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-kube-api-access-82jrg\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.635606 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.676428 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hmh5p"] Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.677554 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.684000 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-469gs" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.687024 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.687286 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.705374 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hmh5p"] Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737731 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737768 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82jrg\" (UniqueName: \"kubernetes.io/projected/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-kube-api-access-82jrg\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737803 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737900 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0" Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737919 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " 
pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.737944 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.739134 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.739441 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.741660 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.743180 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.746040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.746388 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.759380 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.763203 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82jrg\" (UniqueName: \"kubernetes.io/projected/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-kube-api-access-82jrg\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.776548 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.840013 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccz22\" (UniqueName: \"kubernetes.io/projected/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-kube-api-access-ccz22\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.840078 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-config-data\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.840095 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-scripts\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.840149 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.852616 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.945855 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccz22\" (UniqueName: \"kubernetes.io/projected/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-kube-api-access-ccz22\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.947284 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-config-data\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.947567 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-scripts\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.947647 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.954409 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-config-data\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.954896 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.956002 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-scripts\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:39 crc kubenswrapper[4799]: I0127 08:09:39.965990 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccz22\" (UniqueName: \"kubernetes.io/projected/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-kube-api-access-ccz22\") pod \"nova-cell0-conductor-db-sync-hmh5p\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:40 crc kubenswrapper[4799]: I0127 08:09:40.004678 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hmh5p"
Jan 27 08:09:40 crc kubenswrapper[4799]: I0127 08:09:40.450217 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 08:09:40 crc kubenswrapper[4799]: I0127 08:09:40.480543 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11218246-f6a8-477a-9b9a-7abf0338df9e" path="/var/lib/kubelet/pods/11218246-f6a8-477a-9b9a-7abf0338df9e/volumes"
Jan 27 08:09:40 crc kubenswrapper[4799]: I0127 08:09:40.552953 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hmh5p"]
Jan 27 08:09:41 crc kubenswrapper[4799]: I0127 08:09:41.462241 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4","Type":"ContainerStarted","Data":"645e7a56ca8ac37dc99398357884c68673d8c691b0491e5e93509703b5f8f491"}
Jan 27 08:09:41 crc kubenswrapper[4799]: I0127 08:09:41.463053 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4","Type":"ContainerStarted","Data":"983e1aee27eff1308ac3123ecad7bb84214018db1cd40ee868c1c898cb997849"}
Jan 27 08:09:41 crc kubenswrapper[4799]: I0127 08:09:41.463423 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" event={"ID":"a070ecb5-b0ed-42b2-9778-07e62cffe5c4","Type":"ContainerStarted","Data":"a86efbce892ded0a55ee3ff53f247ecd45b5d644382f31ed2f5684a1c8738773"}
Jan 27 08:09:42 crc kubenswrapper[4799]: I0127 08:09:42.494848 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4","Type":"ContainerStarted","Data":"9163187825d75d65281abfc71d2b39e88d5e1b584e17b29a6cb086d7ce38d30f"}
Jan 27 08:09:42 crc kubenswrapper[4799]: I0127 08:09:42.533869 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.533849133 podStartE2EDuration="3.533849133s" podCreationTimestamp="2026-01-27 08:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:09:42.524897463 +0000 UTC m=+1448.836001528" watchObservedRunningTime="2026-01-27 08:09:42.533849133 +0000 UTC m=+1448.844953198"
Jan 27 08:09:44 crc kubenswrapper[4799]: I0127 08:09:44.492442 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 27 08:09:46 crc kubenswrapper[4799]: I0127 08:09:46.765163 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:46 crc kubenswrapper[4799]: I0127 08:09:46.765488 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:46 crc kubenswrapper[4799]: I0127 08:09:46.807276 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:46 crc kubenswrapper[4799]: I0127 08:09:46.807521 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.564800 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.565159 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.713280 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5xppr"]
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.715270 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.717095 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xppr"]
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.800332 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-utilities\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.801004 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-catalog-content\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.801111 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzw2r\" (UniqueName: \"kubernetes.io/projected/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-kube-api-access-gzw2r\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.903253 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-utilities\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.903345 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-catalog-content\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.903406 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzw2r\" (UniqueName: \"kubernetes.io/projected/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-kube-api-access-gzw2r\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.903775 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-utilities\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.903826 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-catalog-content\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:47 crc kubenswrapper[4799]: I0127 08:09:47.919824 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzw2r\" (UniqueName: \"kubernetes.io/projected/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-kube-api-access-gzw2r\") pod \"community-operators-5xppr\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:48 crc kubenswrapper[4799]: I0127 08:09:48.060443 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xppr"
Jan 27 08:09:48 crc kubenswrapper[4799]: I0127 08:09:48.574157 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" event={"ID":"a070ecb5-b0ed-42b2-9778-07e62cffe5c4","Type":"ContainerStarted","Data":"2e49a4fc214f12566b2516479f547812997d82785b8858995acbe7d34ebe9df8"}
Jan 27 08:09:48 crc kubenswrapper[4799]: I0127 08:09:48.596665 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" podStartSLOduration=2.7589213089999998 podStartE2EDuration="9.596646519s" podCreationTimestamp="2026-01-27 08:09:39 +0000 UTC" firstStartedPulling="2026-01-27 08:09:40.57067251 +0000 UTC m=+1446.881776575" lastFinishedPulling="2026-01-27 08:09:47.40839772 +0000 UTC m=+1453.719501785" observedRunningTime="2026-01-27 08:09:48.593656879 +0000 UTC m=+1454.904760954" watchObservedRunningTime="2026-01-27 08:09:48.596646519 +0000 UTC m=+1454.907750584"
Jan 27 08:09:48 crc kubenswrapper[4799]: I0127 08:09:48.622982 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xppr"]
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.599778 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.622433 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.627276 4799 generic.go:334] "Generic (PLEG): container finished" podID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerID="781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1" exitCode=0
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.627391 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xppr" event={"ID":"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3","Type":"ContainerDied","Data":"781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1"}
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.627465 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xppr" event={"ID":"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3","Type":"ContainerStarted","Data":"8e7f72939de9a72683e4fe52e604d26ba322c096bf0b9be5efe916986d3124b8"}
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.853558 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.853616 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.901867 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:49 crc kubenswrapper[4799]: I0127 08:09:49.906922 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:50 crc kubenswrapper[4799]: I0127 08:09:50.640118 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xppr" event={"ID":"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3","Type":"ContainerStarted","Data":"6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829"}
Jan 27 08:09:50 crc kubenswrapper[4799]: I0127 08:09:50.640473 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:50 crc kubenswrapper[4799]: I0127 08:09:50.640807 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:51 crc kubenswrapper[4799]: I0127 08:09:51.654965 4799 generic.go:334] "Generic (PLEG): container finished" podID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerID="6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829" exitCode=0
Jan 27 08:09:51 crc kubenswrapper[4799]: I0127 08:09:51.656706 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xppr" event={"ID":"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3","Type":"ContainerDied","Data":"6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829"}
Jan 27 08:09:52 crc kubenswrapper[4799]: I0127 08:09:52.577961 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:52 crc kubenswrapper[4799]: I0127 08:09:52.610778 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 08:09:52 crc kubenswrapper[4799]: I0127 08:09:52.678117 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xppr" event={"ID":"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3","Type":"ContainerStarted","Data":"6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525"}
Jan 27 08:09:53 crc kubenswrapper[4799]: I0127 08:09:53.731433 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 08:09:53 crc kubenswrapper[4799]: I0127 08:09:53.731805 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 08:09:54 crc kubenswrapper[4799]: E0127 08:09:54.489596 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18be09ba_8035_4f5f_be90_d8892cf5f8ad.slice/crio-conmon-e90dc1d5178ce7b73c738b3b59dcb5c782a1547c9c188cefac3aa0aaa559a86e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18be09ba_8035_4f5f_be90_d8892cf5f8ad.slice/crio-e90dc1d5178ce7b73c738b3b59dcb5c782a1547c9c188cefac3aa0aaa559a86e.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 08:09:54 crc kubenswrapper[4799]: I0127 08:09:54.720016 4799 generic.go:334] "Generic (PLEG): container finished" podID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerID="e90dc1d5178ce7b73c738b3b59dcb5c782a1547c9c188cefac3aa0aaa559a86e" exitCode=137
Jan 27 08:09:54 crc kubenswrapper[4799]: I0127 08:09:54.720085 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerDied","Data":"e90dc1d5178ce7b73c738b3b59dcb5c782a1547c9c188cefac3aa0aaa559a86e"}
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.269558 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.310551 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5xppr" podStartSLOduration=5.59517674 podStartE2EDuration="8.310529208s" podCreationTimestamp="2026-01-27 08:09:47 +0000 UTC" firstStartedPulling="2026-01-27 08:09:49.645732386 +0000 UTC m=+1455.956836451" lastFinishedPulling="2026-01-27 08:09:52.361084854 +0000 UTC m=+1458.672188919" observedRunningTime="2026-01-27 08:09:52.712526869 +0000 UTC m=+1459.023630944" watchObservedRunningTime="2026-01-27 08:09:55.310529208 +0000 UTC m=+1461.621633263"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.354582 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-config-data\") pod \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") "
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.354664 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-combined-ca-bundle\") pod \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") "
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.354780 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp9v5\" (UniqueName: \"kubernetes.io/projected/18be09ba-8035-4f5f-be90-d8892cf5f8ad-kube-api-access-kp9v5\") pod \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") "
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.354873 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-run-httpd\") pod \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") "
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.354900 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-scripts\") pod \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") "
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.354968 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-sg-core-conf-yaml\") pod \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") "
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.354990 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-log-httpd\") pod \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\" (UID: \"18be09ba-8035-4f5f-be90-d8892cf5f8ad\") "
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.355871 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "18be09ba-8035-4f5f-be90-d8892cf5f8ad" (UID: "18be09ba-8035-4f5f-be90-d8892cf5f8ad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.356102 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "18be09ba-8035-4f5f-be90-d8892cf5f8ad" (UID: "18be09ba-8035-4f5f-be90-d8892cf5f8ad"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.359849 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18be09ba-8035-4f5f-be90-d8892cf5f8ad-kube-api-access-kp9v5" (OuterVolumeSpecName: "kube-api-access-kp9v5") pod "18be09ba-8035-4f5f-be90-d8892cf5f8ad" (UID: "18be09ba-8035-4f5f-be90-d8892cf5f8ad"). InnerVolumeSpecName "kube-api-access-kp9v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.369102 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-scripts" (OuterVolumeSpecName: "scripts") pod "18be09ba-8035-4f5f-be90-d8892cf5f8ad" (UID: "18be09ba-8035-4f5f-be90-d8892cf5f8ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.390849 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "18be09ba-8035-4f5f-be90-d8892cf5f8ad" (UID: "18be09ba-8035-4f5f-be90-d8892cf5f8ad"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.418085 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18be09ba-8035-4f5f-be90-d8892cf5f8ad" (UID: "18be09ba-8035-4f5f-be90-d8892cf5f8ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.444337 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-config-data" (OuterVolumeSpecName: "config-data") pod "18be09ba-8035-4f5f-be90-d8892cf5f8ad" (UID: "18be09ba-8035-4f5f-be90-d8892cf5f8ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.457018 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.457051 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.457062 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.457073 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.457086 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp9v5\" (UniqueName: \"kubernetes.io/projected/18be09ba-8035-4f5f-be90-d8892cf5f8ad-kube-api-access-kp9v5\") on node \"crc\" DevicePath \"\""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.457098 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18be09ba-8035-4f5f-be90-d8892cf5f8ad-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.457112 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18be09ba-8035-4f5f-be90-d8892cf5f8ad-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.734539 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18be09ba-8035-4f5f-be90-d8892cf5f8ad","Type":"ContainerDied","Data":"5d29a3a6d6fb0fb8c1c5e25585db0393abcaa08eedc1435aefd1107808dd21d4"}
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.734966 4799 scope.go:117] "RemoveContainer" containerID="e90dc1d5178ce7b73c738b3b59dcb5c782a1547c9c188cefac3aa0aaa559a86e"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.734646 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.764996 4799 scope.go:117] "RemoveContainer" containerID="faa92dc8d7defa68a7ba62741f8477a72bf17eb5374f0b6676cf704214ec6024"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.802881 4799 scope.go:117] "RemoveContainer" containerID="56a1e0e567cfa08f604a96bf778cee55e167511573fd5b5991c46fd59cb2b7b5"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.803109 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.820660 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.828744 4799 scope.go:117] "RemoveContainer" containerID="f8a9336fcb119530a72af14d27be0c664331fa385f674924c8674edd6b652643"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.834571 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 08:09:55 crc kubenswrapper[4799]: E0127 08:09:55.835047 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="proxy-httpd"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835136 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="proxy-httpd"
Jan 27 08:09:55 crc kubenswrapper[4799]: E0127 08:09:55.835218 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-notification-agent"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835274 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-notification-agent"
Jan 27 08:09:55 crc kubenswrapper[4799]: E0127 08:09:55.835360 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-central-agent"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835423 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-central-agent"
Jan 27 08:09:55 crc kubenswrapper[4799]: E0127 08:09:55.835497 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="sg-core"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835553 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="sg-core"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835768 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-central-agent"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835834 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="sg-core"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835902 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="ceilometer-notification-agent"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.835971 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" containerName="proxy-httpd"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.839042 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.842874 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.844544 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.865288 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.972261 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-log-httpd\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.972754 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-run-httpd\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0"
Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.973092 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zsps5\" (UniqueName: \"kubernetes.io/projected/4193641c-2be2-4364-a5c2-1936a70fed09-kube-api-access-zsps5\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.973687 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.973999 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.974238 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-scripts\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:55 crc kubenswrapper[4799]: I0127 08:09:55.974479 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-config-data\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076022 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-log-httpd\") pod \"ceilometer-0\" (UID: 
\"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076084 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-run-httpd\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076171 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsps5\" (UniqueName: \"kubernetes.io/projected/4193641c-2be2-4364-a5c2-1936a70fed09-kube-api-access-zsps5\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076225 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076283 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076355 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-scripts\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076402 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-config-data\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076507 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-log-httpd\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.076951 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-run-httpd\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.081562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.082911 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.083145 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-scripts\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.085706 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-config-data\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.101992 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsps5\" (UniqueName: \"kubernetes.io/projected/4193641c-2be2-4364-a5c2-1936a70fed09-kube-api-access-zsps5\") pod \"ceilometer-0\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.163206 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.465938 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18be09ba-8035-4f5f-be90-d8892cf5f8ad" path="/var/lib/kubelet/pods/18be09ba-8035-4f5f-be90-d8892cf5f8ad/volumes" Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.603411 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:09:56 crc kubenswrapper[4799]: W0127 08:09:56.604759 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4193641c_2be2_4364_a5c2_1936a70fed09.slice/crio-eb21a47c7d03b5cc1f1c72639ddad1aee455f7882d1432f656c824d30bfb3457 WatchSource:0}: Error finding container eb21a47c7d03b5cc1f1c72639ddad1aee455f7882d1432f656c824d30bfb3457: Status 404 returned error can't find the container with id eb21a47c7d03b5cc1f1c72639ddad1aee455f7882d1432f656c824d30bfb3457 Jan 27 08:09:56 crc kubenswrapper[4799]: I0127 08:09:56.745404 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerStarted","Data":"eb21a47c7d03b5cc1f1c72639ddad1aee455f7882d1432f656c824d30bfb3457"} Jan 27 08:09:57 crc kubenswrapper[4799]: I0127 08:09:57.756656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerStarted","Data":"c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd"} Jan 27 08:09:58 crc kubenswrapper[4799]: I0127 08:09:58.060775 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5xppr" Jan 27 08:09:58 crc kubenswrapper[4799]: I0127 08:09:58.062192 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5xppr" Jan 27 08:09:58 crc kubenswrapper[4799]: I0127 08:09:58.767342 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerStarted","Data":"9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4"} Jan 27 08:09:59 crc kubenswrapper[4799]: I0127 08:09:59.116577 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5xppr" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="registry-server" probeResult="failure" output=< Jan 27 08:09:59 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 08:09:59 crc kubenswrapper[4799]: > Jan 27 08:09:59 crc kubenswrapper[4799]: I0127 08:09:59.777223 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerStarted","Data":"d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd"} Jan 27 08:10:00 crc kubenswrapper[4799]: I0127 08:10:00.790275 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerStarted","Data":"9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e"} Jan 27 08:10:00 crc kubenswrapper[4799]: I0127 08:10:00.790819 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 08:10:00 crc kubenswrapper[4799]: I0127 08:10:00.831813 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.717477905 podStartE2EDuration="5.831794232s" podCreationTimestamp="2026-01-27 08:09:55 +0000 UTC" firstStartedPulling="2026-01-27 08:09:56.607961276 +0000 UTC m=+1462.919065341" lastFinishedPulling="2026-01-27 08:09:59.722277603 +0000 UTC m=+1466.033381668" observedRunningTime="2026-01-27 08:10:00.824506755 +0000 UTC m=+1467.135610840" watchObservedRunningTime="2026-01-27 08:10:00.831794232 +0000 UTC m=+1467.142898307" Jan 27 08:10:02 crc kubenswrapper[4799]: I0127 08:10:02.808398 4799 generic.go:334] "Generic (PLEG): container finished" podID="a070ecb5-b0ed-42b2-9778-07e62cffe5c4" containerID="2e49a4fc214f12566b2516479f547812997d82785b8858995acbe7d34ebe9df8" exitCode=0 Jan 27 08:10:02 crc kubenswrapper[4799]: I0127 08:10:02.808521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" event={"ID":"a070ecb5-b0ed-42b2-9778-07e62cffe5c4","Type":"ContainerDied","Data":"2e49a4fc214f12566b2516479f547812997d82785b8858995acbe7d34ebe9df8"} Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.236652 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.330834 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-scripts\") pod \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.330910 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-config-data\") pod \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.330943 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-combined-ca-bundle\") pod \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.330958 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccz22\" (UniqueName: \"kubernetes.io/projected/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-kube-api-access-ccz22\") pod \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\" (UID: \"a070ecb5-b0ed-42b2-9778-07e62cffe5c4\") " Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.336051 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-kube-api-access-ccz22" (OuterVolumeSpecName: "kube-api-access-ccz22") pod "a070ecb5-b0ed-42b2-9778-07e62cffe5c4" (UID: "a070ecb5-b0ed-42b2-9778-07e62cffe5c4"). InnerVolumeSpecName "kube-api-access-ccz22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.338127 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-scripts" (OuterVolumeSpecName: "scripts") pod "a070ecb5-b0ed-42b2-9778-07e62cffe5c4" (UID: "a070ecb5-b0ed-42b2-9778-07e62cffe5c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.361074 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a070ecb5-b0ed-42b2-9778-07e62cffe5c4" (UID: "a070ecb5-b0ed-42b2-9778-07e62cffe5c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.364116 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-config-data" (OuterVolumeSpecName: "config-data") pod "a070ecb5-b0ed-42b2-9778-07e62cffe5c4" (UID: "a070ecb5-b0ed-42b2-9778-07e62cffe5c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.433584 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.433830 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.433840 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.433849 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccz22\" (UniqueName: \"kubernetes.io/projected/a070ecb5-b0ed-42b2-9778-07e62cffe5c4-kube-api-access-ccz22\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.828541 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" event={"ID":"a070ecb5-b0ed-42b2-9778-07e62cffe5c4","Type":"ContainerDied","Data":"a86efbce892ded0a55ee3ff53f247ecd45b5d644382f31ed2f5684a1c8738773"} Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.828596 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a86efbce892ded0a55ee3ff53f247ecd45b5d644382f31ed2f5684a1c8738773" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.828705 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hmh5p" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.948490 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 08:10:04 crc kubenswrapper[4799]: E0127 08:10:04.949088 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a070ecb5-b0ed-42b2-9778-07e62cffe5c4" containerName="nova-cell0-conductor-db-sync" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.949117 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a070ecb5-b0ed-42b2-9778-07e62cffe5c4" containerName="nova-cell0-conductor-db-sync" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.949513 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a070ecb5-b0ed-42b2-9778-07e62cffe5c4" containerName="nova-cell0-conductor-db-sync" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.950561 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.952459 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-469gs" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.954027 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 08:10:04 crc kubenswrapper[4799]: I0127 08:10:04.958737 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.045847 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: 
I0127 08:10:05.045912 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.046056 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xtd\" (UniqueName: \"kubernetes.io/projected/3c53857a-2e9c-4057-9f69-3611704d36f5-kube-api-access-c2xtd\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.148420 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2xtd\" (UniqueName: \"kubernetes.io/projected/3c53857a-2e9c-4057-9f69-3611704d36f5-kube-api-access-c2xtd\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.148728 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.149090 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.154411 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.167373 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2xtd\" (UniqueName: \"kubernetes.io/projected/3c53857a-2e9c-4057-9f69-3611704d36f5-kube-api-access-c2xtd\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.167511 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.287569 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.721650 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 08:10:05 crc kubenswrapper[4799]: I0127 08:10:05.844251 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c53857a-2e9c-4057-9f69-3611704d36f5","Type":"ContainerStarted","Data":"e77182b95cdea62c62e3a2fbee3a9a5b43fd764a7c99a5aafbbc88407ec768d1"} Jan 27 08:10:06 crc kubenswrapper[4799]: I0127 08:10:06.856553 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c53857a-2e9c-4057-9f69-3611704d36f5","Type":"ContainerStarted","Data":"202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0"} Jan 27 08:10:06 crc kubenswrapper[4799]: I0127 08:10:06.857530 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:06 crc kubenswrapper[4799]: I0127 08:10:06.881675 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.88164818 podStartE2EDuration="2.88164818s" podCreationTimestamp="2026-01-27 08:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:06.874118949 +0000 UTC m=+1473.185223034" watchObservedRunningTime="2026-01-27 08:10:06.88164818 +0000 UTC m=+1473.192752285" Jan 27 08:10:08 crc kubenswrapper[4799]: I0127 08:10:08.119227 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5xppr" Jan 27 08:10:08 crc kubenswrapper[4799]: I0127 08:10:08.182840 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5xppr" Jan 27 08:10:08 crc kubenswrapper[4799]: 
I0127 08:10:08.358177 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xppr"] Jan 27 08:10:09 crc kubenswrapper[4799]: I0127 08:10:09.881022 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5xppr" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="registry-server" containerID="cri-o://6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525" gracePeriod=2 Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.315439 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xppr" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.322981 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.344778 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzw2r\" (UniqueName: \"kubernetes.io/projected/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-kube-api-access-gzw2r\") pod \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.345058 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-utilities\") pod \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.345124 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-catalog-content\") pod \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\" (UID: \"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3\") " Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 
08:10:10.348499 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-utilities" (OuterVolumeSpecName: "utilities") pod "f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" (UID: "f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.352195 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-kube-api-access-gzw2r" (OuterVolumeSpecName: "kube-api-access-gzw2r") pod "f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" (UID: "f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3"). InnerVolumeSpecName "kube-api-access-gzw2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.401817 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" (UID: "f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.447220 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.447254 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.447271 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzw2r\" (UniqueName: \"kubernetes.io/projected/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3-kube-api-access-gzw2r\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.776125 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-m8v2m"] Jan 27 08:10:10 crc kubenswrapper[4799]: E0127 08:10:10.776730 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="extract-content" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.776748 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="extract-content" Jan 27 08:10:10 crc kubenswrapper[4799]: E0127 08:10:10.776776 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="extract-utilities" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.776798 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="extract-utilities" Jan 27 08:10:10 crc kubenswrapper[4799]: E0127 08:10:10.776813 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" 
containerName="registry-server" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.776819 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="registry-server" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.776985 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerName="registry-server" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.777599 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.780808 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.781674 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.799357 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8v2m"] Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.854082 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.854204 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzbd9\" (UniqueName: \"kubernetes.io/projected/c8c82402-d3a9-494f-a979-881fa184a4e1-kube-api-access-kzbd9\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc 
kubenswrapper[4799]: I0127 08:10:10.854325 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-scripts\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.854396 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-config-data\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.891712 4799 generic.go:334] "Generic (PLEG): container finished" podID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" containerID="6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525" exitCode=0 Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.891759 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xppr" event={"ID":"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3","Type":"ContainerDied","Data":"6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525"} Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.891790 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xppr" event={"ID":"f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3","Type":"ContainerDied","Data":"8e7f72939de9a72683e4fe52e604d26ba322c096bf0b9be5efe916986d3124b8"} Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.891810 4799 scope.go:117] "RemoveContainer" containerID="6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.891956 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5xppr" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.938718 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xppr"] Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.942252 4799 scope.go:117] "RemoveContainer" containerID="6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.954415 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5xppr"] Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.956496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.956601 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzbd9\" (UniqueName: \"kubernetes.io/projected/c8c82402-d3a9-494f-a979-881fa184a4e1-kube-api-access-kzbd9\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.956654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-scripts\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.956720 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-config-data\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.960661 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-scripts\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:10 crc kubenswrapper[4799]: I0127 08:10:10.961947 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-config-data\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.008743 4799 scope.go:117] "RemoveContainer" containerID="781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.013957 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.025282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzbd9\" (UniqueName: \"kubernetes.io/projected/c8c82402-d3a9-494f-a979-881fa184a4e1-kube-api-access-kzbd9\") pod \"nova-cell0-cell-mapping-m8v2m\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.025399 4799 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.100186 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.105959 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.181963 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.186540 4799 scope.go:117] "RemoveContainer" containerID="6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525" Jan 27 08:10:11 crc kubenswrapper[4799]: E0127 08:10:11.200462 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525\": container with ID starting with 6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525 not found: ID does not exist" containerID="6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.200508 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525"} err="failed to get container status \"6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525\": rpc error: code = NotFound desc = could not find container \"6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525\": container with ID starting with 6f74f27a4472fd0bc7a2c0c30991e6137b8352af73c3c4b60f7bdc36f8c7a525 not found: ID does not exist" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.200539 4799 scope.go:117] "RemoveContainer" containerID="6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829" Jan 27 08:10:11 crc 
kubenswrapper[4799]: I0127 08:10:11.200918 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 08:10:11 crc kubenswrapper[4799]: E0127 08:10:11.205644 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829\": container with ID starting with 6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829 not found: ID does not exist" containerID="6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.205683 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829"} err="failed to get container status \"6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829\": rpc error: code = NotFound desc = could not find container \"6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829\": container with ID starting with 6af69ba88db18a0107f5a2fc7952f42a2c374dca6b5c54d3104910c53491e829 not found: ID does not exist" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.205706 4799 scope.go:117] "RemoveContainer" containerID="781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.220011 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.221381 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: E0127 08:10:11.221744 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1\": container with ID starting with 781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1 not found: ID does not exist" containerID="781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.221772 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1"} err="failed to get container status \"781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1\": rpc error: code = NotFound desc = could not find container \"781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1\": container with ID starting with 781c0a9144fbd29331b5dd302173f8cbce4b7c250ebe6fbee0f15b852941dbd1 not found: ID does not exist" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.239775 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.260377 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.293588 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.293674 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-config-data\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.293738 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47pvh\" (UniqueName: \"kubernetes.io/projected/04c9cbe4-083b-4e5a-99af-c94244b447fe-kube-api-access-47pvh\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.293825 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c9cbe4-083b-4e5a-99af-c94244b447fe-logs\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.327812 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.329525 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.341684 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.368657 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.403631 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-config-data\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.403714 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47pvh\" (UniqueName: \"kubernetes.io/projected/04c9cbe4-083b-4e5a-99af-c94244b447fe-kube-api-access-47pvh\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.403745 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgf8\" (UniqueName: \"kubernetes.io/projected/53d9fc29-5465-481c-a83a-9ad95df32c3e-kube-api-access-ccgf8\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.403810 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c9cbe4-083b-4e5a-99af-c94244b447fe-logs\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.403829 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-config-data\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.403912 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.403995 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.405068 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c9cbe4-083b-4e5a-99af-c94244b447fe-logs\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.435883 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.438456 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.448327 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.448410 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-config-data\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.449205 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.474410 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47pvh\" (UniqueName: \"kubernetes.io/projected/04c9cbe4-083b-4e5a-99af-c94244b447fe-kube-api-access-47pvh\") pod \"nova-api-0\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.505909 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.508872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-config-data\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.508961 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdbg\" (UniqueName: 
\"kubernetes.io/projected/6fc34be8-91df-440e-8b68-662a2741d332-kube-api-access-hvdbg\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.509016 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.509040 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.509111 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.509168 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccgf8\" (UniqueName: \"kubernetes.io/projected/53d9fc29-5465-481c-a83a-9ad95df32c3e-kube-api-access-ccgf8\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.522208 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.543263 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccgf8\" (UniqueName: \"kubernetes.io/projected/53d9fc29-5465-481c-a83a-9ad95df32c3e-kube-api-access-ccgf8\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.546999 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-config-data\") pod \"nova-scheduler-0\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.561863 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-22jbb"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.563799 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.584121 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-22jbb"] Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.611280 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvdbg\" (UniqueName: \"kubernetes.io/projected/6fc34be8-91df-440e-8b68-662a2741d332-kube-api-access-hvdbg\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.611375 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.611423 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-config-data\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.611466 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.611573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.611648 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d595653a-d634-4c23-bf4e-ad1998e76134-logs\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.611671 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thwsh\" (UniqueName: \"kubernetes.io/projected/d595653a-d634-4c23-bf4e-ad1998e76134-kube-api-access-thwsh\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.616164 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.616628 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.632938 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.633256 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvdbg\" (UniqueName: \"kubernetes.io/projected/6fc34be8-91df-440e-8b68-662a2741d332-kube-api-access-hvdbg\") pod \"nova-cell1-novncproxy-0\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.713828 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.714485 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d595653a-d634-4c23-bf4e-ad1998e76134-logs\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.714526 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thwsh\" (UniqueName: \"kubernetes.io/projected/d595653a-d634-4c23-bf4e-ad1998e76134-kube-api-access-thwsh\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.714572 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-svc\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc 
kubenswrapper[4799]: I0127 08:10:11.714648 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.714685 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-config\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.714843 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.715435 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d595653a-d634-4c23-bf4e-ad1998e76134-logs\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.716135 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.716243 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-config-data\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.716387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xx6q\" (UniqueName: \"kubernetes.io/projected/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-kube-api-access-4xx6q\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.724029 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.724884 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-config-data\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.731944 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thwsh\" (UniqueName: \"kubernetes.io/projected/d595653a-d634-4c23-bf4e-ad1998e76134-kube-api-access-thwsh\") pod \"nova-metadata-0\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.749968 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.780930 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.794594 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.824290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.824380 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-config\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.824401 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.824459 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xx6q\" (UniqueName: \"kubernetes.io/projected/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-kube-api-access-4xx6q\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc 
kubenswrapper[4799]: I0127 08:10:11.824520 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.824557 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-svc\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.825354 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-svc\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.826072 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.826386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-config\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.827990 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.828201 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.843653 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xx6q\" (UniqueName: \"kubernetes.io/projected/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-kube-api-access-4xx6q\") pod \"dnsmasq-dns-757b4f8459-22jbb\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.896593 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:11 crc kubenswrapper[4799]: I0127 08:10:11.897634 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8v2m"] Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.106128 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:12 crc kubenswrapper[4799]: W0127 08:10:12.115775 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04c9cbe4_083b_4e5a_99af_c94244b447fe.slice/crio-f13283d3e73b3e1b0f539ddb9023720a2bf06de340fb9fa6665d8804af4206c4 WatchSource:0}: Error finding container f13283d3e73b3e1b0f539ddb9023720a2bf06de340fb9fa6665d8804af4206c4: Status 404 returned error can't find the container with id f13283d3e73b3e1b0f539ddb9023720a2bf06de340fb9fa6665d8804af4206c4 Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.120157 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.318109 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.334101 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tfq6j"] Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.335986 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.348144 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.348363 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.363835 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tfq6j"] Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.457407 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-scripts\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.457513 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjdtt\" (UniqueName: \"kubernetes.io/projected/e8974c0d-814e-4e96-b79e-41971c3761c7-kube-api-access-hjdtt\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.457551 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.457676 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-config-data\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.463179 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3" path="/var/lib/kubelet/pods/f8d14d0c-37a0-442b-8a6d-06e6fc0c72f3/volumes" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.509659 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.521285 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:12 crc kubenswrapper[4799]: W0127 08:10:12.540198 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc34be8_91df_440e_8b68_662a2741d332.slice/crio-32a089b824ba585bb0a21e21c713b7ba1c0f962e2167152029a8954157a5f46f WatchSource:0}: Error finding container 32a089b824ba585bb0a21e21c713b7ba1c0f962e2167152029a8954157a5f46f: Status 404 returned error can't find the container with id 32a089b824ba585bb0a21e21c713b7ba1c0f962e2167152029a8954157a5f46f Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.558913 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjdtt\" (UniqueName: \"kubernetes.io/projected/e8974c0d-814e-4e96-b79e-41971c3761c7-kube-api-access-hjdtt\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.558963 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.559023 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-config-data\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.559167 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-scripts\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.571004 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-config-data\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.565905 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.572085 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-scripts\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.592885 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjdtt\" (UniqueName: \"kubernetes.io/projected/e8974c0d-814e-4e96-b79e-41971c3761c7-kube-api-access-hjdtt\") pod \"nova-cell1-conductor-db-sync-tfq6j\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.644012 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-22jbb"] Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.670471 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.930691 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04c9cbe4-083b-4e5a-99af-c94244b447fe","Type":"ContainerStarted","Data":"f13283d3e73b3e1b0f539ddb9023720a2bf06de340fb9fa6665d8804af4206c4"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.936621 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8v2m" event={"ID":"c8c82402-d3a9-494f-a979-881fa184a4e1","Type":"ContainerStarted","Data":"44bc46982a56c1d5622c1a64e37403dc64800a0b2badf4a2f6a3a1d809304011"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.936664 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8v2m" event={"ID":"c8c82402-d3a9-494f-a979-881fa184a4e1","Type":"ContainerStarted","Data":"5a971526f525b33a4b6fe767175e7e728b73a111aad7eb6b563a1e32d38e6f18"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.946843 4799 generic.go:334] "Generic 
(PLEG): container finished" podID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerID="1a3483f254a4ccddb358a85f5075a97a4b4d9b8f0c07206e170e1566a3b7db9a" exitCode=0 Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.946917 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" event={"ID":"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73","Type":"ContainerDied","Data":"1a3483f254a4ccddb358a85f5075a97a4b4d9b8f0c07206e170e1566a3b7db9a"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.946957 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" event={"ID":"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73","Type":"ContainerStarted","Data":"15d449fc9f3aaa7a6b2aad5ffceb76c001700b4969b4b5f30a20460f01bccd92"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.955095 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53d9fc29-5465-481c-a83a-9ad95df32c3e","Type":"ContainerStarted","Data":"78d9e3b7a35d99aaea869777e1de829e9f07ed9be8715fc9507099ff10ec347a"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.959294 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6fc34be8-91df-440e-8b68-662a2741d332","Type":"ContainerStarted","Data":"32a089b824ba585bb0a21e21c713b7ba1c0f962e2167152029a8954157a5f46f"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.963371 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d595653a-d634-4c23-bf4e-ad1998e76134","Type":"ContainerStarted","Data":"b158f94775abb0d761d7d11a2aedf2edb06bab512f345c0d3b21bbaba23a5ec9"} Jan 27 08:10:12 crc kubenswrapper[4799]: I0127 08:10:12.996426 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-m8v2m" podStartSLOduration=2.996395381 podStartE2EDuration="2.996395381s" podCreationTimestamp="2026-01-27 08:10:10 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:12.955148384 +0000 UTC m=+1479.266252469" watchObservedRunningTime="2026-01-27 08:10:12.996395381 +0000 UTC m=+1479.307499446" Jan 27 08:10:13 crc kubenswrapper[4799]: I0127 08:10:13.143827 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tfq6j"] Jan 27 08:10:13 crc kubenswrapper[4799]: I0127 08:10:13.984082 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" event={"ID":"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73","Type":"ContainerStarted","Data":"17dfb08fe8178c1d306515db03bb46849ecda3ce4eaa4bed4488e6fe713665e9"} Jan 27 08:10:13 crc kubenswrapper[4799]: I0127 08:10:13.984958 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:14 crc kubenswrapper[4799]: I0127 08:10:14.006966 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" event={"ID":"e8974c0d-814e-4e96-b79e-41971c3761c7","Type":"ContainerStarted","Data":"63680fd5fbc7fe6d82c8a34dc3d4dfb3482e02133edccfcbed602cebabc84481"} Jan 27 08:10:14 crc kubenswrapper[4799]: I0127 08:10:14.007004 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" event={"ID":"e8974c0d-814e-4e96-b79e-41971c3761c7","Type":"ContainerStarted","Data":"11762b5eb9254ba435161dc755ba7a4d460d24d5f0845ef5db6004d9574d2b64"} Jan 27 08:10:14 crc kubenswrapper[4799]: I0127 08:10:14.017940 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" podStartSLOduration=3.017902738 podStartE2EDuration="3.017902738s" podCreationTimestamp="2026-01-27 08:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 08:10:14.006174604 +0000 UTC m=+1480.317278679" watchObservedRunningTime="2026-01-27 08:10:14.017902738 +0000 UTC m=+1480.329006803" Jan 27 08:10:14 crc kubenswrapper[4799]: I0127 08:10:14.025780 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" podStartSLOduration=2.025750959 podStartE2EDuration="2.025750959s" podCreationTimestamp="2026-01-27 08:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:14.020155499 +0000 UTC m=+1480.331259574" watchObservedRunningTime="2026-01-27 08:10:14.025750959 +0000 UTC m=+1480.336855024" Jan 27 08:10:15 crc kubenswrapper[4799]: I0127 08:10:15.062281 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:15 crc kubenswrapper[4799]: I0127 08:10:15.077776 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.042149 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04c9cbe4-083b-4e5a-99af-c94244b447fe","Type":"ContainerStarted","Data":"0196696a99d83cff567b7ccdc4fa86e6f603ead4b71c1756125b08538beeae47"} Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.043716 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04c9cbe4-083b-4e5a-99af-c94244b447fe","Type":"ContainerStarted","Data":"e5651c5b7379a75592901438d55a99de8bbec7d0d80b2ae8262facd046d1c1d4"} Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.047993 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53d9fc29-5465-481c-a83a-9ad95df32c3e","Type":"ContainerStarted","Data":"9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0"} Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 
08:10:17.055923 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6fc34be8-91df-440e-8b68-662a2741d332","Type":"ContainerStarted","Data":"48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044"} Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.055970 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6fc34be8-91df-440e-8b68-662a2741d332" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044" gracePeriod=30 Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.058413 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d595653a-d634-4c23-bf4e-ad1998e76134","Type":"ContainerStarted","Data":"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07"} Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.058451 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d595653a-d634-4c23-bf4e-ad1998e76134","Type":"ContainerStarted","Data":"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3"} Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.058695 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" containerName="nova-metadata-metadata" containerID="cri-o://cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07" gracePeriod=30 Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.058694 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" containerName="nova-metadata-log" containerID="cri-o://967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3" gracePeriod=30 Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 
08:10:17.070730 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.219612648 podStartE2EDuration="7.070708306s" podCreationTimestamp="2026-01-27 08:10:10 +0000 UTC" firstStartedPulling="2026-01-27 08:10:12.11977657 +0000 UTC m=+1478.430880645" lastFinishedPulling="2026-01-27 08:10:15.970872238 +0000 UTC m=+1482.281976303" observedRunningTime="2026-01-27 08:10:17.068822185 +0000 UTC m=+1483.379926260" watchObservedRunningTime="2026-01-27 08:10:17.070708306 +0000 UTC m=+1483.381812381" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.101339 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.347394795 podStartE2EDuration="6.101295816s" podCreationTimestamp="2026-01-27 08:10:11 +0000 UTC" firstStartedPulling="2026-01-27 08:10:12.318550341 +0000 UTC m=+1478.629654406" lastFinishedPulling="2026-01-27 08:10:16.072451352 +0000 UTC m=+1482.383555427" observedRunningTime="2026-01-27 08:10:17.0868933 +0000 UTC m=+1483.397997385" watchObservedRunningTime="2026-01-27 08:10:17.101295816 +0000 UTC m=+1483.412399891" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.112967 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.698967374 podStartE2EDuration="6.112948489s" podCreationTimestamp="2026-01-27 08:10:11 +0000 UTC" firstStartedPulling="2026-01-27 08:10:12.518435772 +0000 UTC m=+1478.829539837" lastFinishedPulling="2026-01-27 08:10:15.932416847 +0000 UTC m=+1482.243520952" observedRunningTime="2026-01-27 08:10:17.108824698 +0000 UTC m=+1483.419928763" watchObservedRunningTime="2026-01-27 08:10:17.112948489 +0000 UTC m=+1483.424052554" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.129827 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.740063127 
podStartE2EDuration="6.129809721s" podCreationTimestamp="2026-01-27 08:10:11 +0000 UTC" firstStartedPulling="2026-01-27 08:10:12.542495397 +0000 UTC m=+1478.853599462" lastFinishedPulling="2026-01-27 08:10:15.932241981 +0000 UTC m=+1482.243346056" observedRunningTime="2026-01-27 08:10:17.124399606 +0000 UTC m=+1483.435503681" watchObservedRunningTime="2026-01-27 08:10:17.129809721 +0000 UTC m=+1483.440913786" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.657997 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.770020 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thwsh\" (UniqueName: \"kubernetes.io/projected/d595653a-d634-4c23-bf4e-ad1998e76134-kube-api-access-thwsh\") pod \"d595653a-d634-4c23-bf4e-ad1998e76134\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.770086 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-config-data\") pod \"d595653a-d634-4c23-bf4e-ad1998e76134\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.770315 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d595653a-d634-4c23-bf4e-ad1998e76134-logs\") pod \"d595653a-d634-4c23-bf4e-ad1998e76134\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.770393 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-combined-ca-bundle\") pod \"d595653a-d634-4c23-bf4e-ad1998e76134\" (UID: \"d595653a-d634-4c23-bf4e-ad1998e76134\") " Jan 
27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.770745 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d595653a-d634-4c23-bf4e-ad1998e76134-logs" (OuterVolumeSpecName: "logs") pod "d595653a-d634-4c23-bf4e-ad1998e76134" (UID: "d595653a-d634-4c23-bf4e-ad1998e76134"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.770846 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d595653a-d634-4c23-bf4e-ad1998e76134-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.775745 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d595653a-d634-4c23-bf4e-ad1998e76134-kube-api-access-thwsh" (OuterVolumeSpecName: "kube-api-access-thwsh") pod "d595653a-d634-4c23-bf4e-ad1998e76134" (UID: "d595653a-d634-4c23-bf4e-ad1998e76134"). InnerVolumeSpecName "kube-api-access-thwsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.797396 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d595653a-d634-4c23-bf4e-ad1998e76134" (UID: "d595653a-d634-4c23-bf4e-ad1998e76134"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.797816 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-config-data" (OuterVolumeSpecName: "config-data") pod "d595653a-d634-4c23-bf4e-ad1998e76134" (UID: "d595653a-d634-4c23-bf4e-ad1998e76134"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.873551 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.873658 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thwsh\" (UniqueName: \"kubernetes.io/projected/d595653a-d634-4c23-bf4e-ad1998e76134-kube-api-access-thwsh\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:17 crc kubenswrapper[4799]: I0127 08:10:17.873682 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d595653a-d634-4c23-bf4e-ad1998e76134-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.069774 4799 generic.go:334] "Generic (PLEG): container finished" podID="d595653a-d634-4c23-bf4e-ad1998e76134" containerID="cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07" exitCode=0 Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.070387 4799 generic.go:334] "Generic (PLEG): container finished" podID="d595653a-d634-4c23-bf4e-ad1998e76134" containerID="967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3" exitCode=143 Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.069873 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.069857 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d595653a-d634-4c23-bf4e-ad1998e76134","Type":"ContainerDied","Data":"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07"} Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.070572 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d595653a-d634-4c23-bf4e-ad1998e76134","Type":"ContainerDied","Data":"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3"} Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.070609 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d595653a-d634-4c23-bf4e-ad1998e76134","Type":"ContainerDied","Data":"b158f94775abb0d761d7d11a2aedf2edb06bab512f345c0d3b21bbaba23a5ec9"} Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.070642 4799 scope.go:117] "RemoveContainer" containerID="cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.099482 4799 scope.go:117] "RemoveContainer" containerID="967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.108601 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.117073 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.121626 4799 scope.go:117] "RemoveContainer" containerID="cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07" Jan 27 08:10:18 crc kubenswrapper[4799]: E0127 08:10:18.122639 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07\": container with ID starting with cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07 not found: ID does not exist" containerID="cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.122694 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07"} err="failed to get container status \"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07\": rpc error: code = NotFound desc = could not find container \"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07\": container with ID starting with cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07 not found: ID does not exist" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.122730 4799 scope.go:117] "RemoveContainer" containerID="967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3" Jan 27 08:10:18 crc kubenswrapper[4799]: E0127 08:10:18.128190 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3\": container with ID starting with 967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3 not found: ID does not exist" containerID="967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.128240 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3"} err="failed to get container status \"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3\": rpc error: code = NotFound desc = could not find container \"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3\": container with ID 
starting with 967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3 not found: ID does not exist" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.128272 4799 scope.go:117] "RemoveContainer" containerID="cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.128669 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07"} err="failed to get container status \"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07\": rpc error: code = NotFound desc = could not find container \"cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07\": container with ID starting with cb6f15f4dc37f2c38ee66435a68441c3da6f82ecbaa5344f83528bc4d1b5ac07 not found: ID does not exist" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.128712 4799 scope.go:117] "RemoveContainer" containerID="967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.129065 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3"} err="failed to get container status \"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3\": rpc error: code = NotFound desc = could not find container \"967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3\": container with ID starting with 967e68f824c6ccc64b8acad58a062ef0440b917242c4bc94d5d13ecc7ebe62d3 not found: ID does not exist" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.138171 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:18 crc kubenswrapper[4799]: E0127 08:10:18.138668 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" 
containerName="nova-metadata-metadata" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.138689 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" containerName="nova-metadata-metadata" Jan 27 08:10:18 crc kubenswrapper[4799]: E0127 08:10:18.138709 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" containerName="nova-metadata-log" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.138717 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" containerName="nova-metadata-log" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.138951 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" containerName="nova-metadata-metadata" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.138970 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" containerName="nova-metadata-log" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.140175 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.142806 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.142901 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.149026 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.282288 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbnx\" (UniqueName: \"kubernetes.io/projected/cab714c9-c2f8-481b-be0f-bac28096cc59-kube-api-access-7gbnx\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.282406 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.282462 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.282492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab714c9-c2f8-481b-be0f-bac28096cc59-logs\") pod 
\"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.282542 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-config-data\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.384049 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gbnx\" (UniqueName: \"kubernetes.io/projected/cab714c9-c2f8-481b-be0f-bac28096cc59-kube-api-access-7gbnx\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.384117 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.384160 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.384185 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab714c9-c2f8-481b-be0f-bac28096cc59-logs\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 
08:10:18.384216 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-config-data\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.384876 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab714c9-c2f8-481b-be0f-bac28096cc59-logs\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.393876 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.393967 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.394048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-config-data\") pod \"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.401104 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gbnx\" (UniqueName: \"kubernetes.io/projected/cab714c9-c2f8-481b-be0f-bac28096cc59-kube-api-access-7gbnx\") pod 
\"nova-metadata-0\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.466832 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d595653a-d634-4c23-bf4e-ad1998e76134" path="/var/lib/kubelet/pods/d595653a-d634-4c23-bf4e-ad1998e76134/volumes" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.470568 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:18 crc kubenswrapper[4799]: I0127 08:10:18.910840 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:19 crc kubenswrapper[4799]: I0127 08:10:19.087471 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cab714c9-c2f8-481b-be0f-bac28096cc59","Type":"ContainerStarted","Data":"5917675fd0824f63a9dc8a3cfeccb795210d1ee61b4329c2de10ff5d6e5af175"} Jan 27 08:10:20 crc kubenswrapper[4799]: I0127 08:10:20.097692 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cab714c9-c2f8-481b-be0f-bac28096cc59","Type":"ContainerStarted","Data":"0aecc580646d7f26ce8326fb5b422ec685ac4aaf4508c31fc3759fa671205894"} Jan 27 08:10:20 crc kubenswrapper[4799]: I0127 08:10:20.097797 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cab714c9-c2f8-481b-be0f-bac28096cc59","Type":"ContainerStarted","Data":"ef842a3eacf672e4dbafbaeec8963f4a7f2ae7e8b92fb4cc7484e29ae1f707a5"} Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.634174 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.635477 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.750985 4799 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.751082 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.781912 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.797005 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.834004 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.833975229 podStartE2EDuration="3.833975229s" podCreationTimestamp="2026-01-27 08:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:20.122726472 +0000 UTC m=+1486.433830547" watchObservedRunningTime="2026-01-27 08:10:21.833975229 +0000 UTC m=+1488.145079334" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.899625 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.980244 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-m98pb"] Jan 27 08:10:21 crc kubenswrapper[4799]: I0127 08:10:21.980494 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" containerName="dnsmasq-dns" containerID="cri-o://607f87d36e1034d05e1aad79171a81d35a409182a32c94104026b68380ebce9b" gracePeriod=10 Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.079703 4799 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.137398 4799 generic.go:334] "Generic (PLEG): container finished" podID="e8974c0d-814e-4e96-b79e-41971c3761c7" containerID="63680fd5fbc7fe6d82c8a34dc3d4dfb3482e02133edccfcbed602cebabc84481" exitCode=0 Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.137461 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" event={"ID":"e8974c0d-814e-4e96-b79e-41971c3761c7","Type":"ContainerDied","Data":"63680fd5fbc7fe6d82c8a34dc3d4dfb3482e02133edccfcbed602cebabc84481"} Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.143912 4799 generic.go:334] "Generic (PLEG): container finished" podID="38d14031-750d-40d9-9894-e7e81fcb6538" containerID="607f87d36e1034d05e1aad79171a81d35a409182a32c94104026b68380ebce9b" exitCode=0 Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.143974 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" event={"ID":"38d14031-750d-40d9-9894-e7e81fcb6538","Type":"ContainerDied","Data":"607f87d36e1034d05e1aad79171a81d35a409182a32c94104026b68380ebce9b"} Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.145549 4799 generic.go:334] "Generic (PLEG): container finished" podID="c8c82402-d3a9-494f-a979-881fa184a4e1" containerID="44bc46982a56c1d5622c1a64e37403dc64800a0b2badf4a2f6a3a1d809304011" exitCode=0 Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.145614 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8v2m" event={"ID":"c8c82402-d3a9-494f-a979-881fa184a4e1","Type":"ContainerDied","Data":"44bc46982a56c1d5622c1a64e37403dc64800a0b2badf4a2f6a3a1d809304011"} Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.209486 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.568525 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.673015 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-config\") pod \"38d14031-750d-40d9-9894-e7e81fcb6538\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.673121 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-swift-storage-0\") pod \"38d14031-750d-40d9-9894-e7e81fcb6538\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.673143 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-nb\") pod \"38d14031-750d-40d9-9894-e7e81fcb6538\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.673200 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-sb\") pod \"38d14031-750d-40d9-9894-e7e81fcb6538\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.673235 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx7kg\" (UniqueName: \"kubernetes.io/projected/38d14031-750d-40d9-9894-e7e81fcb6538-kube-api-access-sx7kg\") pod 
\"38d14031-750d-40d9-9894-e7e81fcb6538\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.673293 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-svc\") pod \"38d14031-750d-40d9-9894-e7e81fcb6538\" (UID: \"38d14031-750d-40d9-9894-e7e81fcb6538\") " Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.705756 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d14031-750d-40d9-9894-e7e81fcb6538-kube-api-access-sx7kg" (OuterVolumeSpecName: "kube-api-access-sx7kg") pod "38d14031-750d-40d9-9894-e7e81fcb6538" (UID: "38d14031-750d-40d9-9894-e7e81fcb6538"). InnerVolumeSpecName "kube-api-access-sx7kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.718752 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.718868 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.737885 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38d14031-750d-40d9-9894-e7e81fcb6538" (UID: "38d14031-750d-40d9-9894-e7e81fcb6538"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.738749 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38d14031-750d-40d9-9894-e7e81fcb6538" (UID: "38d14031-750d-40d9-9894-e7e81fcb6538"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.744122 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38d14031-750d-40d9-9894-e7e81fcb6538" (UID: "38d14031-750d-40d9-9894-e7e81fcb6538"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.758886 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-config" (OuterVolumeSpecName: "config") pod "38d14031-750d-40d9-9894-e7e81fcb6538" (UID: "38d14031-750d-40d9-9894-e7e81fcb6538"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.761913 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38d14031-750d-40d9-9894-e7e81fcb6538" (UID: "38d14031-750d-40d9-9894-e7e81fcb6538"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.775504 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.775542 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.775554 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.775564 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.775575 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx7kg\" (UniqueName: \"kubernetes.io/projected/38d14031-750d-40d9-9894-e7e81fcb6538-kube-api-access-sx7kg\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:22 crc kubenswrapper[4799]: I0127 08:10:22.775586 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d14031-750d-40d9-9894-e7e81fcb6538-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.154991 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" event={"ID":"38d14031-750d-40d9-9894-e7e81fcb6538","Type":"ContainerDied","Data":"725c1ce1458fe76229638d83fcba2a275566be8f03a632787824873108d01d89"} Jan 27 08:10:23 crc 
kubenswrapper[4799]: I0127 08:10:23.155123 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-m98pb" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.155322 4799 scope.go:117] "RemoveContainer" containerID="607f87d36e1034d05e1aad79171a81d35a409182a32c94104026b68380ebce9b" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.208536 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-m98pb"] Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.219185 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-m98pb"] Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.223548 4799 scope.go:117] "RemoveContainer" containerID="4a1b97fbce6eba591e536c18ca8edce65cec1e1907b9883c719af90b0583a283" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.471870 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.471921 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.652245 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.658561 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.731191 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.731273 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.732038 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.733236 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45e8464efa823c0efb39a137ea02aa341a85fc57fd2ab60277b88ead10fb975d"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.733439 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://45e8464efa823c0efb39a137ea02aa341a85fc57fd2ab60277b88ead10fb975d" gracePeriod=600 Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811576 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-hjdtt\" (UniqueName: \"kubernetes.io/projected/e8974c0d-814e-4e96-b79e-41971c3761c7-kube-api-access-hjdtt\") pod \"e8974c0d-814e-4e96-b79e-41971c3761c7\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811641 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-config-data\") pod \"e8974c0d-814e-4e96-b79e-41971c3761c7\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811732 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-combined-ca-bundle\") pod \"e8974c0d-814e-4e96-b79e-41971c3761c7\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811754 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzbd9\" (UniqueName: \"kubernetes.io/projected/c8c82402-d3a9-494f-a979-881fa184a4e1-kube-api-access-kzbd9\") pod \"c8c82402-d3a9-494f-a979-881fa184a4e1\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811793 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-scripts\") pod \"e8974c0d-814e-4e96-b79e-41971c3761c7\" (UID: \"e8974c0d-814e-4e96-b79e-41971c3761c7\") " Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811822 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-combined-ca-bundle\") pod \"c8c82402-d3a9-494f-a979-881fa184a4e1\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " 
Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811843 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-config-data\") pod \"c8c82402-d3a9-494f-a979-881fa184a4e1\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.811920 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-scripts\") pod \"c8c82402-d3a9-494f-a979-881fa184a4e1\" (UID: \"c8c82402-d3a9-494f-a979-881fa184a4e1\") " Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.817575 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-scripts" (OuterVolumeSpecName: "scripts") pod "e8974c0d-814e-4e96-b79e-41971c3761c7" (UID: "e8974c0d-814e-4e96-b79e-41971c3761c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.818889 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c82402-d3a9-494f-a979-881fa184a4e1-kube-api-access-kzbd9" (OuterVolumeSpecName: "kube-api-access-kzbd9") pod "c8c82402-d3a9-494f-a979-881fa184a4e1" (UID: "c8c82402-d3a9-494f-a979-881fa184a4e1"). InnerVolumeSpecName "kube-api-access-kzbd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.819110 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8974c0d-814e-4e96-b79e-41971c3761c7-kube-api-access-hjdtt" (OuterVolumeSpecName: "kube-api-access-hjdtt") pod "e8974c0d-814e-4e96-b79e-41971c3761c7" (UID: "e8974c0d-814e-4e96-b79e-41971c3761c7"). InnerVolumeSpecName "kube-api-access-hjdtt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.819402 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-scripts" (OuterVolumeSpecName: "scripts") pod "c8c82402-d3a9-494f-a979-881fa184a4e1" (UID: "c8c82402-d3a9-494f-a979-881fa184a4e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.838389 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-config-data" (OuterVolumeSpecName: "config-data") pod "c8c82402-d3a9-494f-a979-881fa184a4e1" (UID: "c8c82402-d3a9-494f-a979-881fa184a4e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.839577 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8974c0d-814e-4e96-b79e-41971c3761c7" (UID: "e8974c0d-814e-4e96-b79e-41971c3761c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.842730 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8c82402-d3a9-494f-a979-881fa184a4e1" (UID: "c8c82402-d3a9-494f-a979-881fa184a4e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.847927 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-config-data" (OuterVolumeSpecName: "config-data") pod "e8974c0d-814e-4e96-b79e-41971c3761c7" (UID: "e8974c0d-814e-4e96-b79e-41971c3761c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914421 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914472 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjdtt\" (UniqueName: \"kubernetes.io/projected/e8974c0d-814e-4e96-b79e-41971c3761c7-kube-api-access-hjdtt\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914491 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914508 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914524 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzbd9\" (UniqueName: \"kubernetes.io/projected/c8c82402-d3a9-494f-a979-881fa184a4e1-kube-api-access-kzbd9\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914539 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e8974c0d-814e-4e96-b79e-41971c3761c7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914555 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:23 crc kubenswrapper[4799]: I0127 08:10:23.914570 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c82402-d3a9-494f-a979-881fa184a4e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.198865 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m8v2m" event={"ID":"c8c82402-d3a9-494f-a979-881fa184a4e1","Type":"ContainerDied","Data":"5a971526f525b33a4b6fe767175e7e728b73a111aad7eb6b563a1e32d38e6f18"} Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.198931 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a971526f525b33a4b6fe767175e7e728b73a111aad7eb6b563a1e32d38e6f18" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.199022 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m8v2m" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.207666 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="45e8464efa823c0efb39a137ea02aa341a85fc57fd2ab60277b88ead10fb975d" exitCode=0 Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.207943 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"45e8464efa823c0efb39a137ea02aa341a85fc57fd2ab60277b88ead10fb975d"} Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.207986 4799 scope.go:117] "RemoveContainer" containerID="13185769064c6ec2b432a1350a219c32fc8634c50a20acef33753a2a5d7615d7" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.212673 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" event={"ID":"e8974c0d-814e-4e96-b79e-41971c3761c7","Type":"ContainerDied","Data":"11762b5eb9254ba435161dc755ba7a4d460d24d5f0845ef5db6004d9574d2b64"} Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.212718 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11762b5eb9254ba435161dc755ba7a4d460d24d5f0845ef5db6004d9574d2b64" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.212818 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tfq6j" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.282396 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 08:10:24 crc kubenswrapper[4799]: E0127 08:10:24.283128 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c82402-d3a9-494f-a979-881fa184a4e1" containerName="nova-manage" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.283227 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c82402-d3a9-494f-a979-881fa184a4e1" containerName="nova-manage" Jan 27 08:10:24 crc kubenswrapper[4799]: E0127 08:10:24.283290 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" containerName="init" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.283354 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" containerName="init" Jan 27 08:10:24 crc kubenswrapper[4799]: E0127 08:10:24.283424 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8974c0d-814e-4e96-b79e-41971c3761c7" containerName="nova-cell1-conductor-db-sync" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.283502 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8974c0d-814e-4e96-b79e-41971c3761c7" containerName="nova-cell1-conductor-db-sync" Jan 27 08:10:24 crc kubenswrapper[4799]: E0127 08:10:24.283571 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" containerName="dnsmasq-dns" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.283621 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" containerName="dnsmasq-dns" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.283856 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8974c0d-814e-4e96-b79e-41971c3761c7" 
containerName="nova-cell1-conductor-db-sync" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.283932 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" containerName="dnsmasq-dns" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.283985 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c82402-d3a9-494f-a979-881fa184a4e1" containerName="nova-manage" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.284746 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.292772 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.295662 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.378383 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.378593 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-log" containerID="cri-o://e5651c5b7379a75592901438d55a99de8bbec7d0d80b2ae8262facd046d1c1d4" gracePeriod=30 Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.379013 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-api" containerID="cri-o://0196696a99d83cff567b7ccdc4fa86e6f603ead4b71c1756125b08538beeae47" gracePeriod=30 Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.397764 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 
08:10:24.398389 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="53d9fc29-5465-481c-a83a-9ad95df32c3e" containerName="nova-scheduler-scheduler" containerID="cri-o://9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" gracePeriod=30 Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.416662 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.416908 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-log" containerID="cri-o://ef842a3eacf672e4dbafbaeec8963f4a7f2ae7e8b92fb4cc7484e29ae1f707a5" gracePeriod=30 Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.417040 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-metadata" containerID="cri-o://0aecc580646d7f26ce8326fb5b422ec685ac4aaf4508c31fc3759fa671205894" gracePeriod=30 Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.425998 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.426100 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxb2x\" (UniqueName: \"kubernetes.io/projected/8bca1b10-545f-4e35-a5af-e760d464d0ff-kube-api-access-vxb2x\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc 
kubenswrapper[4799]: I0127 08:10:24.426197 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.480153 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d14031-750d-40d9-9894-e7e81fcb6538" path="/var/lib/kubelet/pods/38d14031-750d-40d9-9894-e7e81fcb6538/volumes" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.527857 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxb2x\" (UniqueName: \"kubernetes.io/projected/8bca1b10-545f-4e35-a5af-e760d464d0ff-kube-api-access-vxb2x\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.527953 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.528001 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.532536 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.533939 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.546726 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxb2x\" (UniqueName: \"kubernetes.io/projected/8bca1b10-545f-4e35-a5af-e760d464d0ff-kube-api-access-vxb2x\") pod \"nova-cell1-conductor-0\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:24 crc kubenswrapper[4799]: I0127 08:10:24.607749 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.149391 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.247792 4799 generic.go:334] "Generic (PLEG): container finished" podID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerID="e5651c5b7379a75592901438d55a99de8bbec7d0d80b2ae8262facd046d1c1d4" exitCode=143 Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.248047 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04c9cbe4-083b-4e5a-99af-c94244b447fe","Type":"ContainerDied","Data":"e5651c5b7379a75592901438d55a99de8bbec7d0d80b2ae8262facd046d1c1d4"} Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.253588 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8bca1b10-545f-4e35-a5af-e760d464d0ff","Type":"ContainerStarted","Data":"a4775c61029a392a18b16843bdfea2a3a01d17c8fb63acb95483fef0852b91c8"} Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.256945 4799 generic.go:334] "Generic (PLEG): container finished" podID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerID="0aecc580646d7f26ce8326fb5b422ec685ac4aaf4508c31fc3759fa671205894" exitCode=0 Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.256998 4799 generic.go:334] "Generic (PLEG): container finished" podID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerID="ef842a3eacf672e4dbafbaeec8963f4a7f2ae7e8b92fb4cc7484e29ae1f707a5" exitCode=143 Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.257021 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cab714c9-c2f8-481b-be0f-bac28096cc59","Type":"ContainerDied","Data":"0aecc580646d7f26ce8326fb5b422ec685ac4aaf4508c31fc3759fa671205894"} Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.257074 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"cab714c9-c2f8-481b-be0f-bac28096cc59","Type":"ContainerDied","Data":"ef842a3eacf672e4dbafbaeec8963f4a7f2ae7e8b92fb4cc7484e29ae1f707a5"} Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.271423 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f"} Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.395744 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.549523 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-config-data\") pod \"cab714c9-c2f8-481b-be0f-bac28096cc59\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.549590 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-combined-ca-bundle\") pod \"cab714c9-c2f8-481b-be0f-bac28096cc59\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.549654 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab714c9-c2f8-481b-be0f-bac28096cc59-logs\") pod \"cab714c9-c2f8-481b-be0f-bac28096cc59\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.549721 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-nova-metadata-tls-certs\") pod \"cab714c9-c2f8-481b-be0f-bac28096cc59\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.549739 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gbnx\" (UniqueName: \"kubernetes.io/projected/cab714c9-c2f8-481b-be0f-bac28096cc59-kube-api-access-7gbnx\") pod \"cab714c9-c2f8-481b-be0f-bac28096cc59\" (UID: \"cab714c9-c2f8-481b-be0f-bac28096cc59\") " Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.550915 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab714c9-c2f8-481b-be0f-bac28096cc59-logs" (OuterVolumeSpecName: "logs") pod "cab714c9-c2f8-481b-be0f-bac28096cc59" (UID: "cab714c9-c2f8-481b-be0f-bac28096cc59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.551569 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab714c9-c2f8-481b-be0f-bac28096cc59-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.554207 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab714c9-c2f8-481b-be0f-bac28096cc59-kube-api-access-7gbnx" (OuterVolumeSpecName: "kube-api-access-7gbnx") pod "cab714c9-c2f8-481b-be0f-bac28096cc59" (UID: "cab714c9-c2f8-481b-be0f-bac28096cc59"). InnerVolumeSpecName "kube-api-access-7gbnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.601034 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cab714c9-c2f8-481b-be0f-bac28096cc59" (UID: "cab714c9-c2f8-481b-be0f-bac28096cc59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.601442 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-config-data" (OuterVolumeSpecName: "config-data") pod "cab714c9-c2f8-481b-be0f-bac28096cc59" (UID: "cab714c9-c2f8-481b-be0f-bac28096cc59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.626028 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cab714c9-c2f8-481b-be0f-bac28096cc59" (UID: "cab714c9-c2f8-481b-be0f-bac28096cc59"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.653631 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.653664 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.653676 4799 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cab714c9-c2f8-481b-be0f-bac28096cc59-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:25 crc kubenswrapper[4799]: I0127 08:10:25.653686 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gbnx\" (UniqueName: \"kubernetes.io/projected/cab714c9-c2f8-481b-be0f-bac28096cc59-kube-api-access-7gbnx\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.174927 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.282778 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cab714c9-c2f8-481b-be0f-bac28096cc59","Type":"ContainerDied","Data":"5917675fd0824f63a9dc8a3cfeccb795210d1ee61b4329c2de10ff5d6e5af175"} Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.282850 4799 scope.go:117] "RemoveContainer" containerID="0aecc580646d7f26ce8326fb5b422ec685ac4aaf4508c31fc3759fa671205894" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.282791 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.287218 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8bca1b10-545f-4e35-a5af-e760d464d0ff","Type":"ContainerStarted","Data":"647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc"} Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.287264 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.312979 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.312955895 podStartE2EDuration="2.312955895s" podCreationTimestamp="2026-01-27 08:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:26.30144818 +0000 UTC m=+1492.612552255" watchObservedRunningTime="2026-01-27 08:10:26.312955895 +0000 UTC m=+1492.624059960" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.320192 4799 scope.go:117] "RemoveContainer" containerID="ef842a3eacf672e4dbafbaeec8963f4a7f2ae7e8b92fb4cc7484e29ae1f707a5" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.334474 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.348789 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.358047 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:26 crc kubenswrapper[4799]: E0127 08:10:26.358590 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-metadata" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 
08:10:26.358615 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-metadata" Jan 27 08:10:26 crc kubenswrapper[4799]: E0127 08:10:26.358647 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-log" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.358655 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-log" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.358883 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-log" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.358904 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" containerName="nova-metadata-metadata" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.360104 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.364868 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.365008 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.365656 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.464281 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab714c9-c2f8-481b-be0f-bac28096cc59" path="/var/lib/kubelet/pods/cab714c9-c2f8-481b-be0f-bac28096cc59/volumes" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.468456 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqs2\" (UniqueName: \"kubernetes.io/projected/0b1a8d96-854c-4df2-9b33-19c50ca49e14-kube-api-access-wxqs2\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.468542 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-config-data\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.468592 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc 
kubenswrapper[4799]: I0127 08:10:26.468720 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.468749 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1a8d96-854c-4df2-9b33-19c50ca49e14-logs\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.570993 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxqs2\" (UniqueName: \"kubernetes.io/projected/0b1a8d96-854c-4df2-9b33-19c50ca49e14-kube-api-access-wxqs2\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.571122 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-config-data\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.571204 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.571445 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.571488 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1a8d96-854c-4df2-9b33-19c50ca49e14-logs\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.572984 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1a8d96-854c-4df2-9b33-19c50ca49e14-logs\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.578334 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.578366 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.579326 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-config-data\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 
08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.596630 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxqs2\" (UniqueName: \"kubernetes.io/projected/0b1a8d96-854c-4df2-9b33-19c50ca49e14-kube-api-access-wxqs2\") pod \"nova-metadata-0\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: I0127 08:10:26.697548 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:10:26 crc kubenswrapper[4799]: E0127 08:10:26.753608 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:10:26 crc kubenswrapper[4799]: E0127 08:10:26.755044 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:10:26 crc kubenswrapper[4799]: E0127 08:10:26.757774 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:10:26 crc kubenswrapper[4799]: E0127 08:10:26.757866 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="53d9fc29-5465-481c-a83a-9ad95df32c3e" containerName="nova-scheduler-scheduler" Jan 27 08:10:27 crc kubenswrapper[4799]: I0127 08:10:27.155830 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:10:27 crc kubenswrapper[4799]: W0127 08:10:27.156365 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1a8d96_854c_4df2_9b33_19c50ca49e14.slice/crio-1435e9119d11448800be298b9a3ede23d2d1afdbd9af013b5fbf3d57c9fdfd3b WatchSource:0}: Error finding container 1435e9119d11448800be298b9a3ede23d2d1afdbd9af013b5fbf3d57c9fdfd3b: Status 404 returned error can't find the container with id 1435e9119d11448800be298b9a3ede23d2d1afdbd9af013b5fbf3d57c9fdfd3b Jan 27 08:10:27 crc kubenswrapper[4799]: I0127 08:10:27.297092 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b1a8d96-854c-4df2-9b33-19c50ca49e14","Type":"ContainerStarted","Data":"1435e9119d11448800be298b9a3ede23d2d1afdbd9af013b5fbf3d57c9fdfd3b"} Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.025649 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.103427 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgf8\" (UniqueName: \"kubernetes.io/projected/53d9fc29-5465-481c-a83a-9ad95df32c3e-kube-api-access-ccgf8\") pod \"53d9fc29-5465-481c-a83a-9ad95df32c3e\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.103632 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-config-data\") pod \"53d9fc29-5465-481c-a83a-9ad95df32c3e\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.103677 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-combined-ca-bundle\") pod \"53d9fc29-5465-481c-a83a-9ad95df32c3e\" (UID: \"53d9fc29-5465-481c-a83a-9ad95df32c3e\") " Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.107975 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d9fc29-5465-481c-a83a-9ad95df32c3e-kube-api-access-ccgf8" (OuterVolumeSpecName: "kube-api-access-ccgf8") pod "53d9fc29-5465-481c-a83a-9ad95df32c3e" (UID: "53d9fc29-5465-481c-a83a-9ad95df32c3e"). InnerVolumeSpecName "kube-api-access-ccgf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.128855 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53d9fc29-5465-481c-a83a-9ad95df32c3e" (UID: "53d9fc29-5465-481c-a83a-9ad95df32c3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.141726 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-config-data" (OuterVolumeSpecName: "config-data") pod "53d9fc29-5465-481c-a83a-9ad95df32c3e" (UID: "53d9fc29-5465-481c-a83a-9ad95df32c3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.205608 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.205639 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d9fc29-5465-481c-a83a-9ad95df32c3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.205651 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccgf8\" (UniqueName: \"kubernetes.io/projected/53d9fc29-5465-481c-a83a-9ad95df32c3e-kube-api-access-ccgf8\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.309393 4799 generic.go:334] "Generic (PLEG): container finished" podID="53d9fc29-5465-481c-a83a-9ad95df32c3e" containerID="9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" exitCode=0 Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.309451 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53d9fc29-5465-481c-a83a-9ad95df32c3e","Type":"ContainerDied","Data":"9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0"} Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.309489 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.310321 4799 scope.go:117] "RemoveContainer" containerID="9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.310784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53d9fc29-5465-481c-a83a-9ad95df32c3e","Type":"ContainerDied","Data":"78d9e3b7a35d99aaea869777e1de829e9f07ed9be8715fc9507099ff10ec347a"} Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.312967 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b1a8d96-854c-4df2-9b33-19c50ca49e14","Type":"ContainerStarted","Data":"d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc"} Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.313006 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b1a8d96-854c-4df2-9b33-19c50ca49e14","Type":"ContainerStarted","Data":"94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5"} Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.346403 4799 scope.go:117] "RemoveContainer" containerID="9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" Jan 27 08:10:28 crc kubenswrapper[4799]: E0127 08:10:28.346914 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0\": container with ID starting with 9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0 not found: ID does not exist" containerID="9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.346950 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0"} err="failed to get container status \"9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0\": rpc error: code = NotFound desc = could not find container \"9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0\": container with ID starting with 9a6f9203e142d72c89da4fe927f3707bfd831adba03cf3bc2ed32320acaaefe0 not found: ID does not exist" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.357847 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.357801987 podStartE2EDuration="2.357801987s" podCreationTimestamp="2026-01-27 08:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:28.331857136 +0000 UTC m=+1494.642961211" watchObservedRunningTime="2026-01-27 08:10:28.357801987 +0000 UTC m=+1494.668906052" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.380397 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.394030 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.410725 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:28 crc kubenswrapper[4799]: E0127 08:10:28.411189 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d9fc29-5465-481c-a83a-9ad95df32c3e" containerName="nova-scheduler-scheduler" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.411212 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d9fc29-5465-481c-a83a-9ad95df32c3e" containerName="nova-scheduler-scheduler" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.411467 4799 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="53d9fc29-5465-481c-a83a-9ad95df32c3e" containerName="nova-scheduler-scheduler" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.412163 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.415514 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.419617 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.461010 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d9fc29-5465-481c-a83a-9ad95df32c3e" path="/var/lib/kubelet/pods/53d9fc29-5465-481c-a83a-9ad95df32c3e/volumes" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.512434 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzht9\" (UniqueName: \"kubernetes.io/projected/57046e54-7327-4252-90e0-2a4420ea98c9-kube-api-access-tzht9\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.512644 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-config-data\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.512765 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " 
pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.614067 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-config-data\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.614178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.614241 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzht9\" (UniqueName: \"kubernetes.io/projected/57046e54-7327-4252-90e0-2a4420ea98c9-kube-api-access-tzht9\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.626884 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.628725 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzht9\" (UniqueName: \"kubernetes.io/projected/57046e54-7327-4252-90e0-2a4420ea98c9-kube-api-access-tzht9\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.629787 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-config-data\") pod \"nova-scheduler-0\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " pod="openstack/nova-scheduler-0" Jan 27 08:10:28 crc kubenswrapper[4799]: I0127 08:10:28.734861 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.241240 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:10:29 crc kubenswrapper[4799]: W0127 08:10:29.241868 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57046e54_7327_4252_90e0_2a4420ea98c9.slice/crio-7bf2868714cae3f04d63c5b5ec7f9e975ea919a857a298c83a3c57c279a5fd74 WatchSource:0}: Error finding container 7bf2868714cae3f04d63c5b5ec7f9e975ea919a857a298c83a3c57c279a5fd74: Status 404 returned error can't find the container with id 7bf2868714cae3f04d63c5b5ec7f9e975ea919a857a298c83a3c57c279a5fd74 Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.322798 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57046e54-7327-4252-90e0-2a4420ea98c9","Type":"ContainerStarted","Data":"7bf2868714cae3f04d63c5b5ec7f9e975ea919a857a298c83a3c57c279a5fd74"} Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.337513 4799 generic.go:334] "Generic (PLEG): container finished" podID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerID="0196696a99d83cff567b7ccdc4fa86e6f603ead4b71c1756125b08538beeae47" exitCode=0 Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.338545 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04c9cbe4-083b-4e5a-99af-c94244b447fe","Type":"ContainerDied","Data":"0196696a99d83cff567b7ccdc4fa86e6f603ead4b71c1756125b08538beeae47"} Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.338575 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04c9cbe4-083b-4e5a-99af-c94244b447fe","Type":"ContainerDied","Data":"f13283d3e73b3e1b0f539ddb9023720a2bf06de340fb9fa6665d8804af4206c4"} Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.338585 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13283d3e73b3e1b0f539ddb9023720a2bf06de340fb9fa6665d8804af4206c4" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.396866 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.441277 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c9cbe4-083b-4e5a-99af-c94244b447fe-logs\") pod \"04c9cbe4-083b-4e5a-99af-c94244b447fe\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.441384 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-config-data\") pod \"04c9cbe4-083b-4e5a-99af-c94244b447fe\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.441451 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-combined-ca-bundle\") pod \"04c9cbe4-083b-4e5a-99af-c94244b447fe\" (UID: \"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.441484 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47pvh\" (UniqueName: \"kubernetes.io/projected/04c9cbe4-083b-4e5a-99af-c94244b447fe-kube-api-access-47pvh\") pod \"04c9cbe4-083b-4e5a-99af-c94244b447fe\" (UID: 
\"04c9cbe4-083b-4e5a-99af-c94244b447fe\") " Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.442855 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c9cbe4-083b-4e5a-99af-c94244b447fe-logs" (OuterVolumeSpecName: "logs") pod "04c9cbe4-083b-4e5a-99af-c94244b447fe" (UID: "04c9cbe4-083b-4e5a-99af-c94244b447fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.446139 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c9cbe4-083b-4e5a-99af-c94244b447fe-kube-api-access-47pvh" (OuterVolumeSpecName: "kube-api-access-47pvh") pod "04c9cbe4-083b-4e5a-99af-c94244b447fe" (UID: "04c9cbe4-083b-4e5a-99af-c94244b447fe"). InnerVolumeSpecName "kube-api-access-47pvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.473405 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04c9cbe4-083b-4e5a-99af-c94244b447fe" (UID: "04c9cbe4-083b-4e5a-99af-c94244b447fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.481639 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-config-data" (OuterVolumeSpecName: "config-data") pod "04c9cbe4-083b-4e5a-99af-c94244b447fe" (UID: "04c9cbe4-083b-4e5a-99af-c94244b447fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.544518 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04c9cbe4-083b-4e5a-99af-c94244b447fe-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.544547 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.544559 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04c9cbe4-083b-4e5a-99af-c94244b447fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:29 crc kubenswrapper[4799]: I0127 08:10:29.544570 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47pvh\" (UniqueName: \"kubernetes.io/projected/04c9cbe4-083b-4e5a-99af-c94244b447fe-kube-api-access-47pvh\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.302067 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.306737 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c20f20a7-a62c-4138-92dc-e34db63251fa" containerName="kube-state-metrics" containerID="cri-o://090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30" gracePeriod=30 Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.355847 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.357363 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57046e54-7327-4252-90e0-2a4420ea98c9","Type":"ContainerStarted","Data":"a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b"} Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.398400 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.398376472 podStartE2EDuration="2.398376472s" podCreationTimestamp="2026-01-27 08:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:30.384799881 +0000 UTC m=+1496.695903946" watchObservedRunningTime="2026-01-27 08:10:30.398376472 +0000 UTC m=+1496.709480547" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.426065 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.448193 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.467098 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" path="/var/lib/kubelet/pods/04c9cbe4-083b-4e5a-99af-c94244b447fe/volumes" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.468452 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:30 crc kubenswrapper[4799]: E0127 08:10:30.468835 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-log" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.468850 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-log" Jan 27 
08:10:30 crc kubenswrapper[4799]: E0127 08:10:30.468876 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-api" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.468884 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-api" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.469071 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-api" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.469093 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c9cbe4-083b-4e5a-99af-c94244b447fe" containerName="nova-api-log" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.470023 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.505199 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.515352 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.561477 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-logs\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.561793 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-config-data\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc 
kubenswrapper[4799]: I0127 08:10:30.561978 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.562011 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjmp\" (UniqueName: \"kubernetes.io/projected/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-kube-api-access-pgjmp\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.663395 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.663444 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjmp\" (UniqueName: \"kubernetes.io/projected/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-kube-api-access-pgjmp\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.663491 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-logs\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.663507 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-config-data\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.665011 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-logs\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.672098 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-config-data\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.672120 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.686513 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjmp\" (UniqueName: \"kubernetes.io/projected/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-kube-api-access-pgjmp\") pod \"nova-api-0\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " pod="openstack/nova-api-0" Jan 27 08:10:30 crc kubenswrapper[4799]: I0127 08:10:30.924008 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.358530 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.365243 4799 generic.go:334] "Generic (PLEG): container finished" podID="c20f20a7-a62c-4138-92dc-e34db63251fa" containerID="090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30" exitCode=2 Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.365288 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.365323 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c20f20a7-a62c-4138-92dc-e34db63251fa","Type":"ContainerDied","Data":"090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30"} Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.365378 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c20f20a7-a62c-4138-92dc-e34db63251fa","Type":"ContainerDied","Data":"dd076b51607d0964990c4d24374be42b87582fca08ad4c0c8f30cecdefcfcbba"} Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.365422 4799 scope.go:117] "RemoveContainer" containerID="090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.391324 4799 scope.go:117] "RemoveContainer" containerID="090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30" Jan 27 08:10:31 crc kubenswrapper[4799]: E0127 08:10:31.397070 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30\": container with ID starting with 090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30 not found: ID does not exist" containerID="090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.397120 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30"} err="failed to get container status \"090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30\": rpc error: code = NotFound desc = could not find container \"090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30\": container with ID starting with 090d01c6769707631cf51f040360ac39c4037b96b52b3ff88c6c9990989e0f30 not found: ID does not exist" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.471446 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.487094 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb996\" (UniqueName: \"kubernetes.io/projected/c20f20a7-a62c-4138-92dc-e34db63251fa-kube-api-access-hb996\") pod \"c20f20a7-a62c-4138-92dc-e34db63251fa\" (UID: \"c20f20a7-a62c-4138-92dc-e34db63251fa\") " Jan 27 08:10:31 crc kubenswrapper[4799]: W0127 08:10:31.492807 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8754bb_2d84_4d00_a9fc_ae268f1ac580.slice/crio-1420ffda4658c767e140162ba8d3445cffc34f66bb27a28d08bbf71f9698ae32 WatchSource:0}: Error finding container 1420ffda4658c767e140162ba8d3445cffc34f66bb27a28d08bbf71f9698ae32: Status 404 returned error can't find the container with id 1420ffda4658c767e140162ba8d3445cffc34f66bb27a28d08bbf71f9698ae32 Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.497575 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20f20a7-a62c-4138-92dc-e34db63251fa-kube-api-access-hb996" (OuterVolumeSpecName: "kube-api-access-hb996") pod "c20f20a7-a62c-4138-92dc-e34db63251fa" (UID: "c20f20a7-a62c-4138-92dc-e34db63251fa"). InnerVolumeSpecName "kube-api-access-hb996". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.589031 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb996\" (UniqueName: \"kubernetes.io/projected/c20f20a7-a62c-4138-92dc-e34db63251fa-kube-api-access-hb996\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.698550 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.698613 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.702111 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.715049 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.730885 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:10:31 crc kubenswrapper[4799]: E0127 08:10:31.733553 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20f20a7-a62c-4138-92dc-e34db63251fa" containerName="kube-state-metrics" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.733586 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20f20a7-a62c-4138-92dc-e34db63251fa" containerName="kube-state-metrics" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.733855 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20f20a7-a62c-4138-92dc-e34db63251fa" containerName="kube-state-metrics" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.734672 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.736736 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.739163 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.753481 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.796694 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.797021 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtx8\" (UniqueName: \"kubernetes.io/projected/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-api-access-7rtx8\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.797118 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.797515 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.899951 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.900202 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.900289 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rtx8\" (UniqueName: \"kubernetes.io/projected/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-api-access-7rtx8\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.900416 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.903523 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.904097 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.904995 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:31 crc kubenswrapper[4799]: I0127 08:10:31.915355 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rtx8\" (UniqueName: \"kubernetes.io/projected/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-api-access-7rtx8\") pod \"kube-state-metrics-0\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " pod="openstack/kube-state-metrics-0" Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.069593 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.378532 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f8754bb-2d84-4d00-a9fc-ae268f1ac580","Type":"ContainerStarted","Data":"e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e"} Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.378879 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f8754bb-2d84-4d00-a9fc-ae268f1ac580","Type":"ContainerStarted","Data":"567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96"} Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.378893 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f8754bb-2d84-4d00-a9fc-ae268f1ac580","Type":"ContainerStarted","Data":"1420ffda4658c767e140162ba8d3445cffc34f66bb27a28d08bbf71f9698ae32"} Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.398131 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.39810657 podStartE2EDuration="2.39810657s" podCreationTimestamp="2026-01-27 08:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:32.396613289 +0000 UTC m=+1498.707717404" watchObservedRunningTime="2026-01-27 08:10:32.39810657 +0000 UTC m=+1498.709210665" Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.467202 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20f20a7-a62c-4138-92dc-e34db63251fa" path="/var/lib/kubelet/pods/c20f20a7-a62c-4138-92dc-e34db63251fa/volumes" Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.482337 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.483130 4799 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-central-agent" containerID="cri-o://c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd" gracePeriod=30 Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.483488 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="proxy-httpd" containerID="cri-o://9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e" gracePeriod=30 Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.483680 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="sg-core" containerID="cri-o://d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd" gracePeriod=30 Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.483859 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-notification-agent" containerID="cri-o://9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4" gracePeriod=30 Jan 27 08:10:32 crc kubenswrapper[4799]: I0127 08:10:32.540335 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:10:32 crc kubenswrapper[4799]: W0127 08:10:32.550333 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0707039f_a588_4975_a71f_dfe2054ba4e6.slice/crio-981ae664c354205006d9e304e0fee3a52c3ca02640576e79a28ad441b8a3f40f WatchSource:0}: Error finding container 981ae664c354205006d9e304e0fee3a52c3ca02640576e79a28ad441b8a3f40f: Status 404 returned error can't find the container with id 981ae664c354205006d9e304e0fee3a52c3ca02640576e79a28ad441b8a3f40f 
Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.390927 4799 generic.go:334] "Generic (PLEG): container finished" podID="4193641c-2be2-4364-a5c2-1936a70fed09" containerID="9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e" exitCode=0 Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.391635 4799 generic.go:334] "Generic (PLEG): container finished" podID="4193641c-2be2-4364-a5c2-1936a70fed09" containerID="d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd" exitCode=2 Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.391655 4799 generic.go:334] "Generic (PLEG): container finished" podID="4193641c-2be2-4364-a5c2-1936a70fed09" containerID="c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd" exitCode=0 Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.391002 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerDied","Data":"9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e"} Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.391743 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerDied","Data":"d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd"} Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.391767 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerDied","Data":"c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd"} Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.393610 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0707039f-a588-4975-a71f-dfe2054ba4e6","Type":"ContainerStarted","Data":"72014f16587b1075c455dccabf89b73f3156eb870ef19e90bd26b184dfc0c813"} Jan 27 08:10:33 crc 
kubenswrapper[4799]: I0127 08:10:33.393651 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0707039f-a588-4975-a71f-dfe2054ba4e6","Type":"ContainerStarted","Data":"981ae664c354205006d9e304e0fee3a52c3ca02640576e79a28ad441b8a3f40f"} Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.420893 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.023396622 podStartE2EDuration="2.42087215s" podCreationTimestamp="2026-01-27 08:10:31 +0000 UTC" firstStartedPulling="2026-01-27 08:10:32.553185093 +0000 UTC m=+1498.864289158" lastFinishedPulling="2026-01-27 08:10:32.950660631 +0000 UTC m=+1499.261764686" observedRunningTime="2026-01-27 08:10:33.406783334 +0000 UTC m=+1499.717887419" watchObservedRunningTime="2026-01-27 08:10:33.42087215 +0000 UTC m=+1499.731976235" Jan 27 08:10:33 crc kubenswrapper[4799]: I0127 08:10:33.735516 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.260857 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.350429 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsps5\" (UniqueName: \"kubernetes.io/projected/4193641c-2be2-4364-a5c2-1936a70fed09-kube-api-access-zsps5\") pod \"4193641c-2be2-4364-a5c2-1936a70fed09\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.350556 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-sg-core-conf-yaml\") pod \"4193641c-2be2-4364-a5c2-1936a70fed09\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.350626 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-config-data\") pod \"4193641c-2be2-4364-a5c2-1936a70fed09\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.350664 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-run-httpd\") pod \"4193641c-2be2-4364-a5c2-1936a70fed09\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.350681 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-combined-ca-bundle\") pod \"4193641c-2be2-4364-a5c2-1936a70fed09\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.350796 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-log-httpd\") pod \"4193641c-2be2-4364-a5c2-1936a70fed09\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.350831 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-scripts\") pod \"4193641c-2be2-4364-a5c2-1936a70fed09\" (UID: \"4193641c-2be2-4364-a5c2-1936a70fed09\") " Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.351054 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4193641c-2be2-4364-a5c2-1936a70fed09" (UID: "4193641c-2be2-4364-a5c2-1936a70fed09"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.351128 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4193641c-2be2-4364-a5c2-1936a70fed09" (UID: "4193641c-2be2-4364-a5c2-1936a70fed09"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.351511 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.351537 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4193641c-2be2-4364-a5c2-1936a70fed09-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.357397 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-scripts" (OuterVolumeSpecName: "scripts") pod "4193641c-2be2-4364-a5c2-1936a70fed09" (UID: "4193641c-2be2-4364-a5c2-1936a70fed09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.358443 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4193641c-2be2-4364-a5c2-1936a70fed09-kube-api-access-zsps5" (OuterVolumeSpecName: "kube-api-access-zsps5") pod "4193641c-2be2-4364-a5c2-1936a70fed09" (UID: "4193641c-2be2-4364-a5c2-1936a70fed09"). InnerVolumeSpecName "kube-api-access-zsps5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.408377 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4193641c-2be2-4364-a5c2-1936a70fed09" (UID: "4193641c-2be2-4364-a5c2-1936a70fed09"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.435812 4799 generic.go:334] "Generic (PLEG): container finished" podID="4193641c-2be2-4364-a5c2-1936a70fed09" containerID="9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4" exitCode=0 Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.437294 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.440491 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerDied","Data":"9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4"} Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.440544 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.440563 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4193641c-2be2-4364-a5c2-1936a70fed09","Type":"ContainerDied","Data":"eb21a47c7d03b5cc1f1c72639ddad1aee455f7882d1432f656c824d30bfb3457"} Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.440606 4799 scope.go:117] "RemoveContainer" containerID="9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.453386 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.453425 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsps5\" (UniqueName: \"kubernetes.io/projected/4193641c-2be2-4364-a5c2-1936a70fed09-kube-api-access-zsps5\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:34 crc kubenswrapper[4799]: 
I0127 08:10:34.453442 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.497505 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4193641c-2be2-4364-a5c2-1936a70fed09" (UID: "4193641c-2be2-4364-a5c2-1936a70fed09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.516065 4799 scope.go:117] "RemoveContainer" containerID="d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.544491 4799 scope.go:117] "RemoveContainer" containerID="9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.555354 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.563329 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-config-data" (OuterVolumeSpecName: "config-data") pod "4193641c-2be2-4364-a5c2-1936a70fed09" (UID: "4193641c-2be2-4364-a5c2-1936a70fed09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.563510 4799 scope.go:117] "RemoveContainer" containerID="c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.587231 4799 scope.go:117] "RemoveContainer" containerID="9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e" Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.587980 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e\": container with ID starting with 9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e not found: ID does not exist" containerID="9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.588042 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e"} err="failed to get container status \"9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e\": rpc error: code = NotFound desc = could not find container \"9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e\": container with ID starting with 9de33aa9b38ab46f4a0efc0a6d236be5aa63fa32c2b6e33f17ef6f429710779e not found: ID does not exist" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.588067 4799 scope.go:117] "RemoveContainer" containerID="d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd" Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.588395 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd\": container with ID starting with 
d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd not found: ID does not exist" containerID="d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.588432 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd"} err="failed to get container status \"d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd\": rpc error: code = NotFound desc = could not find container \"d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd\": container with ID starting with d688349b2b0d522967e1641dc4f7fed6bb0a6c010d83f570991d2a95bfaa43bd not found: ID does not exist" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.588456 4799 scope.go:117] "RemoveContainer" containerID="9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4" Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.588721 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4\": container with ID starting with 9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4 not found: ID does not exist" containerID="9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.588737 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4"} err="failed to get container status \"9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4\": rpc error: code = NotFound desc = could not find container \"9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4\": container with ID starting with 9fadf65fffa0252dfe2078e49622c8bf130cfea23359cbaf7a371e5921309cf4 not found: ID does not 
exist" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.588749 4799 scope.go:117] "RemoveContainer" containerID="c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd" Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.589400 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd\": container with ID starting with c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd not found: ID does not exist" containerID="c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.589419 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd"} err="failed to get container status \"c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd\": rpc error: code = NotFound desc = could not find container \"c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd\": container with ID starting with c6e81c4d0d9e73f77c719f74d718a77a927161e8f1203f2edbee208dc69900fd not found: ID does not exist" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.645130 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.656766 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4193641c-2be2-4364-a5c2-1936a70fed09-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.789550 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.797328 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:34 crc 
kubenswrapper[4799]: I0127 08:10:34.819898 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.820729 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="proxy-httpd" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.820752 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="proxy-httpd" Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.820777 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-notification-agent" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.820784 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-notification-agent" Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.820802 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="sg-core" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.820808 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="sg-core" Jan 27 08:10:34 crc kubenswrapper[4799]: E0127 08:10:34.820821 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-central-agent" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.820826 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-central-agent" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.820986 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-central-agent" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 
08:10:34.821008 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="ceilometer-notification-agent" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.821023 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="sg-core" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.821035 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" containerName="proxy-httpd" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.822659 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.825868 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.829349 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.830091 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.838402 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862581 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-config-data\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862655 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862682 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-log-httpd\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862808 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-run-httpd\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862829 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvxr\" (UniqueName: \"kubernetes.io/projected/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-kube-api-access-gqvxr\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862953 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-scripts\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862972 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.862996 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.964669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-log-httpd\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.964825 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-run-httpd\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.964847 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqvxr\" (UniqueName: \"kubernetes.io/projected/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-kube-api-access-gqvxr\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.964968 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-scripts\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.965246 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-log-httpd\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.965277 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-run-httpd\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.965355 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.965956 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.966316 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-config-data\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.966357 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc 
kubenswrapper[4799]: I0127 08:10:34.970473 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.970689 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-scripts\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.971102 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.973545 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.980360 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-config-data\") pod \"ceilometer-0\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:34 crc kubenswrapper[4799]: I0127 08:10:34.986811 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqvxr\" (UniqueName: \"kubernetes.io/projected/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-kube-api-access-gqvxr\") pod \"ceilometer-0\" 
(UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " pod="openstack/ceilometer-0" Jan 27 08:10:35 crc kubenswrapper[4799]: I0127 08:10:35.142170 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:35 crc kubenswrapper[4799]: I0127 08:10:35.609799 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:36 crc kubenswrapper[4799]: I0127 08:10:36.466644 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4193641c-2be2-4364-a5c2-1936a70fed09" path="/var/lib/kubelet/pods/4193641c-2be2-4364-a5c2-1936a70fed09/volumes" Jan 27 08:10:36 crc kubenswrapper[4799]: I0127 08:10:36.474324 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerStarted","Data":"dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6"} Jan 27 08:10:36 crc kubenswrapper[4799]: I0127 08:10:36.474394 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerStarted","Data":"3827451892a050153be453296daf5b9dd76c21e6227b5c5c6ae3bb3dbe506380"} Jan 27 08:10:36 crc kubenswrapper[4799]: I0127 08:10:36.698168 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 08:10:36 crc kubenswrapper[4799]: I0127 08:10:36.698219 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 08:10:37 crc kubenswrapper[4799]: I0127 08:10:37.497750 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerStarted","Data":"000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2"} Jan 27 08:10:37 crc kubenswrapper[4799]: I0127 08:10:37.707260 4799 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 08:10:37 crc kubenswrapper[4799]: I0127 08:10:37.719966 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 08:10:38 crc kubenswrapper[4799]: I0127 08:10:38.508715 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerStarted","Data":"36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b"} Jan 27 08:10:38 crc kubenswrapper[4799]: I0127 08:10:38.735847 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 08:10:38 crc kubenswrapper[4799]: I0127 08:10:38.777438 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 08:10:39 crc kubenswrapper[4799]: I0127 08:10:39.537590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerStarted","Data":"2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839"} Jan 27 08:10:39 crc kubenswrapper[4799]: I0127 08:10:39.573652 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.142631346 podStartE2EDuration="5.573634354s" podCreationTimestamp="2026-01-27 08:10:34 +0000 UTC" firstStartedPulling="2026-01-27 08:10:35.618081661 +0000 UTC m=+1501.929185726" lastFinishedPulling="2026-01-27 
08:10:39.049084649 +0000 UTC m=+1505.360188734" observedRunningTime="2026-01-27 08:10:39.568532325 +0000 UTC m=+1505.879636410" watchObservedRunningTime="2026-01-27 08:10:39.573634354 +0000 UTC m=+1505.884738419" Jan 27 08:10:39 crc kubenswrapper[4799]: I0127 08:10:39.583537 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 08:10:40 crc kubenswrapper[4799]: I0127 08:10:40.545896 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 08:10:40 crc kubenswrapper[4799]: I0127 08:10:40.924876 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 08:10:40 crc kubenswrapper[4799]: I0127 08:10:40.924978 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 08:10:42 crc kubenswrapper[4799]: I0127 08:10:42.007520 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 08:10:42 crc kubenswrapper[4799]: I0127 08:10:42.007581 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 08:10:42 crc kubenswrapper[4799]: I0127 08:10:42.082086 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 08:10:46 crc kubenswrapper[4799]: I0127 08:10:46.704599 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 08:10:46 crc kubenswrapper[4799]: I0127 
08:10:46.713344 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 08:10:46 crc kubenswrapper[4799]: I0127 08:10:46.716943 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.491532 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.544774 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-combined-ca-bundle\") pod \"6fc34be8-91df-440e-8b68-662a2741d332\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.546017 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-config-data\") pod \"6fc34be8-91df-440e-8b68-662a2741d332\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.547261 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvdbg\" (UniqueName: \"kubernetes.io/projected/6fc34be8-91df-440e-8b68-662a2741d332-kube-api-access-hvdbg\") pod \"6fc34be8-91df-440e-8b68-662a2741d332\" (UID: \"6fc34be8-91df-440e-8b68-662a2741d332\") " Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.551388 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc34be8-91df-440e-8b68-662a2741d332-kube-api-access-hvdbg" (OuterVolumeSpecName: "kube-api-access-hvdbg") pod "6fc34be8-91df-440e-8b68-662a2741d332" (UID: "6fc34be8-91df-440e-8b68-662a2741d332"). InnerVolumeSpecName "kube-api-access-hvdbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.574163 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-config-data" (OuterVolumeSpecName: "config-data") pod "6fc34be8-91df-440e-8b68-662a2741d332" (UID: "6fc34be8-91df-440e-8b68-662a2741d332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.589398 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fc34be8-91df-440e-8b68-662a2741d332" (UID: "6fc34be8-91df-440e-8b68-662a2741d332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.610867 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.610892 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6fc34be8-91df-440e-8b68-662a2741d332","Type":"ContainerDied","Data":"48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044"} Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.610946 4799 scope.go:117] "RemoveContainer" containerID="48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.610813 4799 generic.go:334] "Generic (PLEG): container finished" podID="6fc34be8-91df-440e-8b68-662a2741d332" containerID="48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044" exitCode=137 Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.611091 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6fc34be8-91df-440e-8b68-662a2741d332","Type":"ContainerDied","Data":"32a089b824ba585bb0a21e21c713b7ba1c0f962e2167152029a8954157a5f46f"} Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.621453 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.656933 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.656975 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc34be8-91df-440e-8b68-662a2741d332-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.656994 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvdbg\" (UniqueName: 
\"kubernetes.io/projected/6fc34be8-91df-440e-8b68-662a2741d332-kube-api-access-hvdbg\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.684045 4799 scope.go:117] "RemoveContainer" containerID="48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044" Jan 27 08:10:47 crc kubenswrapper[4799]: E0127 08:10:47.687415 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044\": container with ID starting with 48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044 not found: ID does not exist" containerID="48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.687458 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044"} err="failed to get container status \"48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044\": rpc error: code = NotFound desc = could not find container \"48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044\": container with ID starting with 48ea990903a64faec8e0c5a01b67130e4b95d295399d1eee9793849691c09044 not found: ID does not exist" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.696473 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.726737 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.735434 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:47 crc kubenswrapper[4799]: E0127 08:10:47.736541 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6fc34be8-91df-440e-8b68-662a2741d332" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.736574 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc34be8-91df-440e-8b68-662a2741d332" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.736951 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc34be8-91df-440e-8b68-662a2741d332" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.737698 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.740023 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.740261 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.741446 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.747366 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.860749 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8ts\" (UniqueName: \"kubernetes.io/projected/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-kube-api-access-pf8ts\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.860790 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.860839 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.860861 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.860910 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.963137 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf8ts\" (UniqueName: \"kubernetes.io/projected/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-kube-api-access-pf8ts\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.963533 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.963693 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.963811 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.963977 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.968646 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.968904 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.969509 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.969666 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:47 crc kubenswrapper[4799]: I0127 08:10:47.981810 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf8ts\" (UniqueName: \"kubernetes.io/projected/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-kube-api-access-pf8ts\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:48 crc kubenswrapper[4799]: I0127 08:10:48.058580 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:48 crc kubenswrapper[4799]: I0127 08:10:48.466908 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc34be8-91df-440e-8b68-662a2741d332" path="/var/lib/kubelet/pods/6fc34be8-91df-440e-8b68-662a2741d332/volumes" Jan 27 08:10:48 crc kubenswrapper[4799]: I0127 08:10:48.511384 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:10:48 crc kubenswrapper[4799]: I0127 08:10:48.624731 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813","Type":"ContainerStarted","Data":"f623259fd3de0140b07d67af2f49130edee9e18c7f5bc29a500ec3619f972381"} Jan 27 08:10:49 crc kubenswrapper[4799]: I0127 08:10:49.644194 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813","Type":"ContainerStarted","Data":"7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46"} Jan 27 08:10:49 crc kubenswrapper[4799]: I0127 08:10:49.682645 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.68262337 podStartE2EDuration="2.68262337s" podCreationTimestamp="2026-01-27 08:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:49.669940893 +0000 UTC m=+1515.981044998" watchObservedRunningTime="2026-01-27 08:10:49.68262337 +0000 UTC m=+1515.993727445" Jan 27 08:10:50 crc kubenswrapper[4799]: I0127 08:10:50.934793 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 08:10:50 crc kubenswrapper[4799]: I0127 08:10:50.935649 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 08:10:50 crc 
kubenswrapper[4799]: I0127 08:10:50.939282 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 08:10:50 crc kubenswrapper[4799]: I0127 08:10:50.942700 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.661977 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.665813 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.844413 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hd42p"] Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.846238 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.865872 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hd42p"] Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.968158 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.968518 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-config\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.968569 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.968610 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlgkw\" (UniqueName: \"kubernetes.io/projected/d57ed20b-0573-4924-aeee-bef05838e330-kube-api-access-vlgkw\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.968664 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:51 crc kubenswrapper[4799]: I0127 08:10:51.968696 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.070823 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-config\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 
08:10:52.070893 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.070929 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlgkw\" (UniqueName: \"kubernetes.io/projected/d57ed20b-0573-4924-aeee-bef05838e330-kube-api-access-vlgkw\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.070955 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.070993 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.071046 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.071989 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-config\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.072118 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.072266 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.072390 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.072402 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.095296 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlgkw\" (UniqueName: 
\"kubernetes.io/projected/d57ed20b-0573-4924-aeee-bef05838e330-kube-api-access-vlgkw\") pod \"dnsmasq-dns-89c5cd4d5-hd42p\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.189911 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:52 crc kubenswrapper[4799]: W0127 08:10:52.681708 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57ed20b_0573_4924_aeee_bef05838e330.slice/crio-612e1ae8b857473df4951719e8cdf4f8a94e6a4f6a82cd7fa585c44a8683de04 WatchSource:0}: Error finding container 612e1ae8b857473df4951719e8cdf4f8a94e6a4f6a82cd7fa585c44a8683de04: Status 404 returned error can't find the container with id 612e1ae8b857473df4951719e8cdf4f8a94e6a4f6a82cd7fa585c44a8683de04 Jan 27 08:10:52 crc kubenswrapper[4799]: I0127 08:10:52.687927 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hd42p"] Jan 27 08:10:53 crc kubenswrapper[4799]: I0127 08:10:53.059556 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:53 crc kubenswrapper[4799]: I0127 08:10:53.684738 4799 generic.go:334] "Generic (PLEG): container finished" podID="d57ed20b-0573-4924-aeee-bef05838e330" containerID="6ddb09c90bfd7470ba6d8bdea139c01bef7b6743e8c08b484044d02f2fa7ad61" exitCode=0 Jan 27 08:10:53 crc kubenswrapper[4799]: I0127 08:10:53.684850 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" event={"ID":"d57ed20b-0573-4924-aeee-bef05838e330","Type":"ContainerDied","Data":"6ddb09c90bfd7470ba6d8bdea139c01bef7b6743e8c08b484044d02f2fa7ad61"} Jan 27 08:10:53 crc kubenswrapper[4799]: I0127 08:10:53.684923 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" event={"ID":"d57ed20b-0573-4924-aeee-bef05838e330","Type":"ContainerStarted","Data":"612e1ae8b857473df4951719e8cdf4f8a94e6a4f6a82cd7fa585c44a8683de04"} Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.169783 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.170440 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-central-agent" containerID="cri-o://dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6" gracePeriod=30 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.170515 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="sg-core" containerID="cri-o://36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b" gracePeriod=30 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.170555 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-notification-agent" containerID="cri-o://000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2" gracePeriod=30 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.170563 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="proxy-httpd" containerID="cri-o://2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839" gracePeriod=30 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.197274 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="proxy-httpd" probeResult="failure" 
output="HTTP probe failed with statuscode: 502" Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.422045 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.694942 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" event={"ID":"d57ed20b-0573-4924-aeee-bef05838e330","Type":"ContainerStarted","Data":"242ed1a16fd5f7f954693a993a1d2ded4c83efdbd95645efc9040027f0bb6c24"} Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.695343 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698068 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerID="2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839" exitCode=0 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698095 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerID="36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b" exitCode=2 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698104 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerID="dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6" exitCode=0 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698275 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-log" containerID="cri-o://567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96" gracePeriod=30 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698519 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerDied","Data":"2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839"} Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698551 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerDied","Data":"36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b"} Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698566 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerDied","Data":"dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6"} Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.698628 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-api" containerID="cri-o://e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e" gracePeriod=30 Jan 27 08:10:54 crc kubenswrapper[4799]: I0127 08:10:54.719863 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" podStartSLOduration=3.719841706 podStartE2EDuration="3.719841706s" podCreationTimestamp="2026-01-27 08:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:10:54.712850354 +0000 UTC m=+1521.023954429" watchObservedRunningTime="2026-01-27 08:10:54.719841706 +0000 UTC m=+1521.030945781" Jan 27 08:10:55 crc kubenswrapper[4799]: I0127 08:10:55.709451 4799 generic.go:334] "Generic (PLEG): container finished" podID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerID="567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96" exitCode=143 Jan 27 08:10:55 crc kubenswrapper[4799]: I0127 08:10:55.710380 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f8754bb-2d84-4d00-a9fc-ae268f1ac580","Type":"ContainerDied","Data":"567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96"} Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.060019 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.081654 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.340148 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.386752 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-config-data\") pod \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.386972 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-logs\") pod \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.387018 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-combined-ca-bundle\") pod \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.387066 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgjmp\" (UniqueName: 
\"kubernetes.io/projected/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-kube-api-access-pgjmp\") pod \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\" (UID: \"7f8754bb-2d84-4d00-a9fc-ae268f1ac580\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.387404 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-logs" (OuterVolumeSpecName: "logs") pod "7f8754bb-2d84-4d00-a9fc-ae268f1ac580" (UID: "7f8754bb-2d84-4d00-a9fc-ae268f1ac580"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.387621 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.393613 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-kube-api-access-pgjmp" (OuterVolumeSpecName: "kube-api-access-pgjmp") pod "7f8754bb-2d84-4d00-a9fc-ae268f1ac580" (UID: "7f8754bb-2d84-4d00-a9fc-ae268f1ac580"). InnerVolumeSpecName "kube-api-access-pgjmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.424610 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-config-data" (OuterVolumeSpecName: "config-data") pod "7f8754bb-2d84-4d00-a9fc-ae268f1ac580" (UID: "7f8754bb-2d84-4d00-a9fc-ae268f1ac580"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.442656 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f8754bb-2d84-4d00-a9fc-ae268f1ac580" (UID: "7f8754bb-2d84-4d00-a9fc-ae268f1ac580"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.452932 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.499352 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.499380 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgjmp\" (UniqueName: \"kubernetes.io/projected/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-kube-api-access-pgjmp\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.499395 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8754bb-2d84-4d00-a9fc-ae268f1ac580-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.600603 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-config-data\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.600697 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-scripts\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.600782 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqvxr\" (UniqueName: \"kubernetes.io/projected/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-kube-api-access-gqvxr\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.600824 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-combined-ca-bundle\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.600856 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-log-httpd\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.600906 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-sg-core-conf-yaml\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.601002 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-ceilometer-tls-certs\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc 
kubenswrapper[4799]: I0127 08:10:58.601028 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-run-httpd\") pod \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\" (UID: \"5d8f5ac9-657e-4d76-9505-a6efc258cd5e\") " Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.601500 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.601807 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.604277 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-scripts" (OuterVolumeSpecName: "scripts") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.606102 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-kube-api-access-gqvxr" (OuterVolumeSpecName: "kube-api-access-gqvxr") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "kube-api-access-gqvxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.634115 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.653333 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.682233 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.703288 4799 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.703334 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.703345 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.703353 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqvxr\" (UniqueName: \"kubernetes.io/projected/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-kube-api-access-gqvxr\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.703363 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.703371 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.703379 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.719310 4799 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-config-data" (OuterVolumeSpecName: "config-data") pod "5d8f5ac9-657e-4d76-9505-a6efc258cd5e" (UID: "5d8f5ac9-657e-4d76-9505-a6efc258cd5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.744201 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerID="000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2" exitCode=0 Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.744279 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerDied","Data":"000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2"} Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.744353 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d8f5ac9-657e-4d76-9505-a6efc258cd5e","Type":"ContainerDied","Data":"3827451892a050153be453296daf5b9dd76c21e6227b5c5c6ae3bb3dbe506380"} Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.744375 4799 scope.go:117] "RemoveContainer" containerID="2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.744553 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.747250 4799 generic.go:334] "Generic (PLEG): container finished" podID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerID="e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e" exitCode=0 Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.747347 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.747352 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f8754bb-2d84-4d00-a9fc-ae268f1ac580","Type":"ContainerDied","Data":"e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e"} Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.747419 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f8754bb-2d84-4d00-a9fc-ae268f1ac580","Type":"ContainerDied","Data":"1420ffda4658c767e140162ba8d3445cffc34f66bb27a28d08bbf71f9698ae32"} Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.775975 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.783120 4799 scope.go:117] "RemoveContainer" containerID="36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.784018 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.799449 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.807724 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d8f5ac9-657e-4d76-9505-a6efc258cd5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.820355 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.821307 4799 scope.go:117] "RemoveContainer" containerID="000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.834769 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.841356 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.841937 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="proxy-httpd" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.841957 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="proxy-httpd" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.841975 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-notification-agent" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.841984 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-notification-agent" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.842001 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-api" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842007 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-api" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.842017 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="sg-core" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842023 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="sg-core" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.842033 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-log" Jan 27 08:10:58 crc 
kubenswrapper[4799]: I0127 08:10:58.842039 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-log" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.842053 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-central-agent" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842060 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-central-agent" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842240 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-central-agent" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842267 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="ceilometer-notification-agent" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842283 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-log" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842294 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" containerName="nova-api-api" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842325 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="sg-core" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.842342 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" containerName="proxy-httpd" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.843798 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.845717 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.846264 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.846424 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.852491 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.862221 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.864809 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.864984 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.865093 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.870270 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.876409 4799 scope.go:117] "RemoveContainer" containerID="dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.878384 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.910188 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-internal-tls-certs\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.910248 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmnlg\" (UniqueName: \"kubernetes.io/projected/99120af8-c2e0-4d5d-8abf-84a350b88689-kube-api-access-wmnlg\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.910284 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.911491 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99120af8-c2e0-4d5d-8abf-84a350b88689-logs\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.911599 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-config-data\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.911667 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-public-tls-certs\") pod \"nova-api-0\" 
(UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.937067 4799 scope.go:117] "RemoveContainer" containerID="2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.939787 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839\": container with ID starting with 2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839 not found: ID does not exist" containerID="2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.939859 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839"} err="failed to get container status \"2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839\": rpc error: code = NotFound desc = could not find container \"2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839\": container with ID starting with 2ef421f88153da5bc56c61be9f86d9194881a350799587f0e432210361a5b839 not found: ID does not exist" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.939886 4799 scope.go:117] "RemoveContainer" containerID="36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.940240 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b\": container with ID starting with 36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b not found: ID does not exist" containerID="36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b" Jan 27 08:10:58 crc kubenswrapper[4799]: 
I0127 08:10:58.940284 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b"} err="failed to get container status \"36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b\": rpc error: code = NotFound desc = could not find container \"36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b\": container with ID starting with 36d3c47a4fb4c05ab358ffb6ef0cb731aa5b60931faac4aeb646a67eaf8b722b not found: ID does not exist" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.940323 4799 scope.go:117] "RemoveContainer" containerID="000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2" Jan 27 08:10:58 crc kubenswrapper[4799]: E0127 08:10:58.941402 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2\": container with ID starting with 000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2 not found: ID does not exist" containerID="000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.941433 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2"} err="failed to get container status \"000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2\": rpc error: code = NotFound desc = could not find container \"000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2\": container with ID starting with 000f70d5d24443a1023dae2d5919a68320d0e937d8ad66fcb38a56c8d626aef2 not found: ID does not exist" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.941455 4799 scope.go:117] "RemoveContainer" containerID="dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6" Jan 27 08:10:58 crc 
kubenswrapper[4799]: E0127 08:10:58.941683 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6\": container with ID starting with dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6 not found: ID does not exist" containerID="dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.941713 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6"} err="failed to get container status \"dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6\": rpc error: code = NotFound desc = could not find container \"dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6\": container with ID starting with dd0801f210a79e6ad4eeb0c4d143f010275155a4e0984bdd62f31eeffdca73e6 not found: ID does not exist" Jan 27 08:10:58 crc kubenswrapper[4799]: I0127 08:10:58.941726 4799 scope.go:117] "RemoveContainer" containerID="e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.013910 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99120af8-c2e0-4d5d-8abf-84a350b88689-logs\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.013987 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-scripts\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014286 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-config-data\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014345 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014367 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-public-tls-certs\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014400 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-config-data\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014613 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014653 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99120af8-c2e0-4d5d-8abf-84a350b88689-logs\") pod 
\"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014796 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzw8x\" (UniqueName: \"kubernetes.io/projected/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-kube-api-access-xzw8x\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014841 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-internal-tls-certs\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014922 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmnlg\" (UniqueName: \"kubernetes.io/projected/99120af8-c2e0-4d5d-8abf-84a350b88689-kube-api-access-wmnlg\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.014962 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.015001 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-log-httpd\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: 
I0127 08:10:59.015058 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.015112 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-run-httpd\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.023276 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-config-data\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.024015 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-internal-tls-certs\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.035327 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-kw8tr"] Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.035763 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.036338 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wmnlg\" (UniqueName: \"kubernetes.io/projected/99120af8-c2e0-4d5d-8abf-84a350b88689-kube-api-access-wmnlg\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.036786 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.037958 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-public-tls-certs\") pod \"nova-api-0\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.039408 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.039426 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.046345 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw8tr"] Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.117379 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.117432 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-config-data\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " 
pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.117470 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.117507 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzw8x\" (UniqueName: \"kubernetes.io/projected/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-kube-api-access-xzw8x\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.117548 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6k4\" (UniqueName: \"kubernetes.io/projected/81cece98-fc44-4b30-b861-92affbfe1e8a-kube-api-access-fx6k4\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.117574 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.117597 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-log-httpd\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc 
kubenswrapper[4799]: I0127 08:10:59.118608 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-run-httpd\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.118637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-log-httpd\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.118719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-scripts\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.118799 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-scripts\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.118863 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.118897 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-config-data\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.118931 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-run-httpd\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.121380 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.122119 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.122239 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-scripts\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.123886 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-config-data\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.124055 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.139262 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzw8x\" (UniqueName: \"kubernetes.io/projected/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-kube-api-access-xzw8x\") pod \"ceilometer-0\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.176103 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.191584 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.220807 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-scripts\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.220937 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-config-data\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.220989 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.221050 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6k4\" (UniqueName: \"kubernetes.io/projected/81cece98-fc44-4b30-b861-92affbfe1e8a-kube-api-access-fx6k4\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.226114 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-config-data\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.226670 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-scripts\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.226946 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.246909 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6k4\" (UniqueName: \"kubernetes.io/projected/81cece98-fc44-4b30-b861-92affbfe1e8a-kube-api-access-fx6k4\") pod \"nova-cell1-cell-mapping-kw8tr\" (UID: 
\"81cece98-fc44-4b30-b861-92affbfe1e8a\") " pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.316997 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.317011 4799 scope.go:117] "RemoveContainer" containerID="567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.383554 4799 scope.go:117] "RemoveContainer" containerID="e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e" Jan 27 08:10:59 crc kubenswrapper[4799]: E0127 08:10:59.384066 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e\": container with ID starting with e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e not found: ID does not exist" containerID="e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.384102 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e"} err="failed to get container status \"e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e\": rpc error: code = NotFound desc = could not find container \"e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e\": container with ID starting with e981df74b1e7b3cb77548fc68751ea7af7f33057b3fbad0e0a6c50ead847e31e not found: ID does not exist" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.384125 4799 scope.go:117] "RemoveContainer" containerID="567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96" Jan 27 08:10:59 crc kubenswrapper[4799]: E0127 08:10:59.384428 4799 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96\": container with ID starting with 567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96 not found: ID does not exist" containerID="567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.384461 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96"} err="failed to get container status \"567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96\": rpc error: code = NotFound desc = could not find container \"567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96\": container with ID starting with 567aafb834b99170785b5244330b42feb6c892eea3c5280568e3137dd5433d96 not found: ID does not exist" Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.684340 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:10:59 crc kubenswrapper[4799]: W0127 08:10:59.700822 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81cece98_fc44_4b30_b861_92affbfe1e8a.slice/crio-5f0b2c8393a46a2824506b5228f1462a358ad081cd119aee34326073d86ed8e4 WatchSource:0}: Error finding container 5f0b2c8393a46a2824506b5228f1462a358ad081cd119aee34326073d86ed8e4: Status 404 returned error can't find the container with id 5f0b2c8393a46a2824506b5228f1462a358ad081cd119aee34326073d86ed8e4 Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.708673 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw8tr"] Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.764270 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerStarted","Data":"3aa9b4c90bf01c243d509a471b3aa7d04706b94259b27e0dc53f30e45dd3fa35"} Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.775524 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw8tr" event={"ID":"81cece98-fc44-4b30-b861-92affbfe1e8a","Type":"ContainerStarted","Data":"5f0b2c8393a46a2824506b5228f1462a358ad081cd119aee34326073d86ed8e4"} Jan 27 08:10:59 crc kubenswrapper[4799]: I0127 08:10:59.800405 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:10:59 crc kubenswrapper[4799]: W0127 08:10:59.805674 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99120af8_c2e0_4d5d_8abf_84a350b88689.slice/crio-3a585fa6258976462f5c1c2e11ff0bb0b288a09f10746d698c51ef9644be5bc9 WatchSource:0}: Error finding container 3a585fa6258976462f5c1c2e11ff0bb0b288a09f10746d698c51ef9644be5bc9: Status 404 returned error can't find the container with id 3a585fa6258976462f5c1c2e11ff0bb0b288a09f10746d698c51ef9644be5bc9 Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.473179 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8f5ac9-657e-4d76-9505-a6efc258cd5e" path="/var/lib/kubelet/pods/5d8f5ac9-657e-4d76-9505-a6efc258cd5e/volumes" Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.475455 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8754bb-2d84-4d00-a9fc-ae268f1ac580" path="/var/lib/kubelet/pods/7f8754bb-2d84-4d00-a9fc-ae268f1ac580/volumes" Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.789747 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99120af8-c2e0-4d5d-8abf-84a350b88689","Type":"ContainerStarted","Data":"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf"} Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 
08:11:00.789807 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99120af8-c2e0-4d5d-8abf-84a350b88689","Type":"ContainerStarted","Data":"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0"} Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.789823 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99120af8-c2e0-4d5d-8abf-84a350b88689","Type":"ContainerStarted","Data":"3a585fa6258976462f5c1c2e11ff0bb0b288a09f10746d698c51ef9644be5bc9"} Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.791734 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw8tr" event={"ID":"81cece98-fc44-4b30-b861-92affbfe1e8a","Type":"ContainerStarted","Data":"329ac23f45fba306609c0758cbb469b51cb52b2e49c3a319f13207b93bc317c3"} Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.795149 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerStarted","Data":"4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528"} Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.809405 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.80938791 podStartE2EDuration="2.80938791s" podCreationTimestamp="2026-01-27 08:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:11:00.80716654 +0000 UTC m=+1527.118270615" watchObservedRunningTime="2026-01-27 08:11:00.80938791 +0000 UTC m=+1527.120491975" Jan 27 08:11:00 crc kubenswrapper[4799]: I0127 08:11:00.840561 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-kw8tr" podStartSLOduration=1.840537104 podStartE2EDuration="1.840537104s" podCreationTimestamp="2026-01-27 
08:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:11:00.82910604 +0000 UTC m=+1527.140210095" watchObservedRunningTime="2026-01-27 08:11:00.840537104 +0000 UTC m=+1527.151641179" Jan 27 08:11:01 crc kubenswrapper[4799]: I0127 08:11:01.805605 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerStarted","Data":"f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32"} Jan 27 08:11:01 crc kubenswrapper[4799]: I0127 08:11:01.807207 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerStarted","Data":"087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774"} Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.191493 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.278000 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-22jbb"] Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.278391 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" podUID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerName="dnsmasq-dns" containerID="cri-o://17dfb08fe8178c1d306515db03bb46849ecda3ce4eaa4bed4488e6fe713665e9" gracePeriod=10 Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.816726 4799 generic.go:334] "Generic (PLEG): container finished" podID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerID="17dfb08fe8178c1d306515db03bb46849ecda3ce4eaa4bed4488e6fe713665e9" exitCode=0 Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.816811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-757b4f8459-22jbb" event={"ID":"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73","Type":"ContainerDied","Data":"17dfb08fe8178c1d306515db03bb46849ecda3ce4eaa4bed4488e6fe713665e9"} Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.817070 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" event={"ID":"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73","Type":"ContainerDied","Data":"15d449fc9f3aaa7a6b2aad5ffceb76c001700b4969b4b5f30a20460f01bccd92"} Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.817087 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d449fc9f3aaa7a6b2aad5ffceb76c001700b4969b4b5f30a20460f01bccd92" Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.841772 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.903977 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-config\") pod \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.904058 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-nb\") pod \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.904108 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-swift-storage-0\") pod \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " Jan 27 08:11:02 crc 
kubenswrapper[4799]: I0127 08:11:02.904123 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-sb\") pod \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.904278 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xx6q\" (UniqueName: \"kubernetes.io/projected/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-kube-api-access-4xx6q\") pod \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.904312 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-svc\") pod \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\" (UID: \"cc9dd129-7e6c-4e2b-97cb-339f1bb23d73\") " Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.913886 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-kube-api-access-4xx6q" (OuterVolumeSpecName: "kube-api-access-4xx6q") pod "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" (UID: "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73"). InnerVolumeSpecName "kube-api-access-4xx6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.961445 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" (UID: "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:02 crc kubenswrapper[4799]: I0127 08:11:02.977412 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" (UID: "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.001558 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-config" (OuterVolumeSpecName: "config") pod "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" (UID: "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.006932 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.006962 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xx6q\" (UniqueName: \"kubernetes.io/projected/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-kube-api-access-4xx6q\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.006973 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.006983 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 
08:11:03.017663 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" (UID: "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.018332 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" (UID: "cc9dd129-7e6c-4e2b-97cb-339f1bb23d73"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.108628 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.108906 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.832327 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-22jbb" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.833060 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerStarted","Data":"bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a"} Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.833600 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.864610 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7052310090000002 podStartE2EDuration="5.864593683s" podCreationTimestamp="2026-01-27 08:10:58 +0000 UTC" firstStartedPulling="2026-01-27 08:10:59.700450582 +0000 UTC m=+1526.011554647" lastFinishedPulling="2026-01-27 08:11:02.859813256 +0000 UTC m=+1529.170917321" observedRunningTime="2026-01-27 08:11:03.861585871 +0000 UTC m=+1530.172689936" watchObservedRunningTime="2026-01-27 08:11:03.864593683 +0000 UTC m=+1530.175697748" Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.886255 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-22jbb"] Jan 27 08:11:03 crc kubenswrapper[4799]: I0127 08:11:03.895688 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-22jbb"] Jan 27 08:11:04 crc kubenswrapper[4799]: I0127 08:11:04.463928 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" path="/var/lib/kubelet/pods/cc9dd129-7e6c-4e2b-97cb-339f1bb23d73/volumes" Jan 27 08:11:05 crc kubenswrapper[4799]: I0127 08:11:05.859763 4799 generic.go:334] "Generic (PLEG): container finished" podID="81cece98-fc44-4b30-b861-92affbfe1e8a" containerID="329ac23f45fba306609c0758cbb469b51cb52b2e49c3a319f13207b93bc317c3" exitCode=0 Jan 27 08:11:05 
crc kubenswrapper[4799]: I0127 08:11:05.859848 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw8tr" event={"ID":"81cece98-fc44-4b30-b861-92affbfe1e8a","Type":"ContainerDied","Data":"329ac23f45fba306609c0758cbb469b51cb52b2e49c3a319f13207b93bc317c3"} Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.341190 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.397744 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-scripts\") pod \"81cece98-fc44-4b30-b861-92affbfe1e8a\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.397861 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-config-data\") pod \"81cece98-fc44-4b30-b861-92affbfe1e8a\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.398406 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx6k4\" (UniqueName: \"kubernetes.io/projected/81cece98-fc44-4b30-b861-92affbfe1e8a-kube-api-access-fx6k4\") pod \"81cece98-fc44-4b30-b861-92affbfe1e8a\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.398540 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-combined-ca-bundle\") pod \"81cece98-fc44-4b30-b861-92affbfe1e8a\" (UID: \"81cece98-fc44-4b30-b861-92affbfe1e8a\") " Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.412558 4799 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cece98-fc44-4b30-b861-92affbfe1e8a-kube-api-access-fx6k4" (OuterVolumeSpecName: "kube-api-access-fx6k4") pod "81cece98-fc44-4b30-b861-92affbfe1e8a" (UID: "81cece98-fc44-4b30-b861-92affbfe1e8a"). InnerVolumeSpecName "kube-api-access-fx6k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.412884 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-scripts" (OuterVolumeSpecName: "scripts") pod "81cece98-fc44-4b30-b861-92affbfe1e8a" (UID: "81cece98-fc44-4b30-b861-92affbfe1e8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.431501 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-config-data" (OuterVolumeSpecName: "config-data") pod "81cece98-fc44-4b30-b861-92affbfe1e8a" (UID: "81cece98-fc44-4b30-b861-92affbfe1e8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.433694 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81cece98-fc44-4b30-b861-92affbfe1e8a" (UID: "81cece98-fc44-4b30-b861-92affbfe1e8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.501860 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.501890 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.501902 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cece98-fc44-4b30-b861-92affbfe1e8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.501911 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx6k4\" (UniqueName: \"kubernetes.io/projected/81cece98-fc44-4b30-b861-92affbfe1e8a-kube-api-access-fx6k4\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.877156 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw8tr" event={"ID":"81cece98-fc44-4b30-b861-92affbfe1e8a","Type":"ContainerDied","Data":"5f0b2c8393a46a2824506b5228f1462a358ad081cd119aee34326073d86ed8e4"} Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.877204 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f0b2c8393a46a2824506b5228f1462a358ad081cd119aee34326073d86ed8e4" Jan 27 08:11:07 crc kubenswrapper[4799]: I0127 08:11:07.877262 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw8tr" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.082336 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.082782 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-log" containerID="cri-o://b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0" gracePeriod=30 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.082848 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-api" containerID="cri-o://56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf" gracePeriod=30 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.104971 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.105192 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-log" containerID="cri-o://94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5" gracePeriod=30 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.105262 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-metadata" containerID="cri-o://d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc" gracePeriod=30 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.146128 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.146380 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="57046e54-7327-4252-90e0-2a4420ea98c9" containerName="nova-scheduler-scheduler" containerID="cri-o://a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b" gracePeriod=30 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.705577 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:11:08 crc kubenswrapper[4799]: E0127 08:11:08.740433 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:11:08 crc kubenswrapper[4799]: E0127 08:11:08.741961 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:11:08 crc kubenswrapper[4799]: E0127 08:11:08.743795 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:11:08 crc kubenswrapper[4799]: E0127 08:11:08.743860 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="57046e54-7327-4252-90e0-2a4420ea98c9" 
containerName="nova-scheduler-scheduler" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.827473 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99120af8-c2e0-4d5d-8abf-84a350b88689-logs\") pod \"99120af8-c2e0-4d5d-8abf-84a350b88689\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.827562 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-internal-tls-certs\") pod \"99120af8-c2e0-4d5d-8abf-84a350b88689\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.827713 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-config-data\") pod \"99120af8-c2e0-4d5d-8abf-84a350b88689\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.827888 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-combined-ca-bundle\") pod \"99120af8-c2e0-4d5d-8abf-84a350b88689\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.827916 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmnlg\" (UniqueName: \"kubernetes.io/projected/99120af8-c2e0-4d5d-8abf-84a350b88689-kube-api-access-wmnlg\") pod \"99120af8-c2e0-4d5d-8abf-84a350b88689\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.827951 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-public-tls-certs\") pod \"99120af8-c2e0-4d5d-8abf-84a350b88689\" (UID: \"99120af8-c2e0-4d5d-8abf-84a350b88689\") " Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.828059 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99120af8-c2e0-4d5d-8abf-84a350b88689-logs" (OuterVolumeSpecName: "logs") pod "99120af8-c2e0-4d5d-8abf-84a350b88689" (UID: "99120af8-c2e0-4d5d-8abf-84a350b88689"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.828481 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99120af8-c2e0-4d5d-8abf-84a350b88689-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.842602 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99120af8-c2e0-4d5d-8abf-84a350b88689-kube-api-access-wmnlg" (OuterVolumeSpecName: "kube-api-access-wmnlg") pod "99120af8-c2e0-4d5d-8abf-84a350b88689" (UID: "99120af8-c2e0-4d5d-8abf-84a350b88689"). InnerVolumeSpecName "kube-api-access-wmnlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.875549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-config-data" (OuterVolumeSpecName: "config-data") pod "99120af8-c2e0-4d5d-8abf-84a350b88689" (UID: "99120af8-c2e0-4d5d-8abf-84a350b88689"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.884414 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99120af8-c2e0-4d5d-8abf-84a350b88689" (UID: "99120af8-c2e0-4d5d-8abf-84a350b88689"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.888566 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "99120af8-c2e0-4d5d-8abf-84a350b88689" (UID: "99120af8-c2e0-4d5d-8abf-84a350b88689"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.889472 4799 generic.go:334] "Generic (PLEG): container finished" podID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerID="56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf" exitCode=0 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.889498 4799 generic.go:334] "Generic (PLEG): container finished" podID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerID="b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0" exitCode=143 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.889535 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99120af8-c2e0-4d5d-8abf-84a350b88689","Type":"ContainerDied","Data":"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf"} Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.889561 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"99120af8-c2e0-4d5d-8abf-84a350b88689","Type":"ContainerDied","Data":"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0"} Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.889572 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99120af8-c2e0-4d5d-8abf-84a350b88689","Type":"ContainerDied","Data":"3a585fa6258976462f5c1c2e11ff0bb0b288a09f10746d698c51ef9644be5bc9"} Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.889586 4799 scope.go:117] "RemoveContainer" containerID="56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.889686 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.895215 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerID="94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5" exitCode=143 Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.895268 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b1a8d96-854c-4df2-9b33-19c50ca49e14","Type":"ContainerDied","Data":"94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5"} Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.919749 4799 scope.go:117] "RemoveContainer" containerID="b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.923677 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "99120af8-c2e0-4d5d-8abf-84a350b88689" (UID: "99120af8-c2e0-4d5d-8abf-84a350b88689"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.936071 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.936103 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.936114 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmnlg\" (UniqueName: \"kubernetes.io/projected/99120af8-c2e0-4d5d-8abf-84a350b88689-kube-api-access-wmnlg\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.936123 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.936130 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99120af8-c2e0-4d5d-8abf-84a350b88689-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.945477 4799 scope.go:117] "RemoveContainer" containerID="56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf" Jan 27 08:11:08 crc kubenswrapper[4799]: E0127 08:11:08.947118 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf\": container with ID starting with 56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf not found: ID does not exist" 
containerID="56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.947158 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf"} err="failed to get container status \"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf\": rpc error: code = NotFound desc = could not find container \"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf\": container with ID starting with 56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf not found: ID does not exist" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.947185 4799 scope.go:117] "RemoveContainer" containerID="b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0" Jan 27 08:11:08 crc kubenswrapper[4799]: E0127 08:11:08.947526 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0\": container with ID starting with b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0 not found: ID does not exist" containerID="b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.947552 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0"} err="failed to get container status \"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0\": rpc error: code = NotFound desc = could not find container \"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0\": container with ID starting with b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0 not found: ID does not exist" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.947570 4799 scope.go:117] 
"RemoveContainer" containerID="56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.947738 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf"} err="failed to get container status \"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf\": rpc error: code = NotFound desc = could not find container \"56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf\": container with ID starting with 56618cbd77a65c5eb5ed4e50a1b185dcf6698e8532afae5ad6ef083cfd8fafaf not found: ID does not exist" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.947758 4799 scope.go:117] "RemoveContainer" containerID="b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0" Jan 27 08:11:08 crc kubenswrapper[4799]: I0127 08:11:08.947995 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0"} err="failed to get container status \"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0\": rpc error: code = NotFound desc = could not find container \"b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0\": container with ID starting with b60f1d8b0894eeca9d72090c75f8582b84cf434c323d116ef9e9453c692f0fc0 not found: ID does not exist" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.228033 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.270651 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.281222 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 08:11:09 crc kubenswrapper[4799]: E0127 08:11:09.281829 4799 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerName="dnsmasq-dns" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.281908 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerName="dnsmasq-dns" Jan 27 08:11:09 crc kubenswrapper[4799]: E0127 08:11:09.281977 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cece98-fc44-4b30-b861-92affbfe1e8a" containerName="nova-manage" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282029 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cece98-fc44-4b30-b861-92affbfe1e8a" containerName="nova-manage" Jan 27 08:11:09 crc kubenswrapper[4799]: E0127 08:11:09.282086 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-log" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282137 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-log" Jan 27 08:11:09 crc kubenswrapper[4799]: E0127 08:11:09.282214 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-api" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282267 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-api" Jan 27 08:11:09 crc kubenswrapper[4799]: E0127 08:11:09.282341 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerName="init" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282397 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerName="init" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282602 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cc9dd129-7e6c-4e2b-97cb-339f1bb23d73" containerName="dnsmasq-dns" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282668 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="81cece98-fc44-4b30-b861-92affbfe1e8a" containerName="nova-manage" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282753 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-api" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.282822 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" containerName="nova-api-log" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.283788 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.287650 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.287934 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.288107 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.291863 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.352822 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034b328a-c365-4b0a-8346-1cd571d65921-logs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.352923 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-config-data\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.353144 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.353180 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-internal-tls-certs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.353812 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-public-tls-certs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.353885 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rgl\" (UniqueName: \"kubernetes.io/projected/034b328a-c365-4b0a-8346-1cd571d65921-kube-api-access-p7rgl\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.455052 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.455096 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-internal-tls-certs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.455135 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-public-tls-certs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.455176 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rgl\" (UniqueName: \"kubernetes.io/projected/034b328a-c365-4b0a-8346-1cd571d65921-kube-api-access-p7rgl\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.455249 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034b328a-c365-4b0a-8346-1cd571d65921-logs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.455316 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-config-data\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.456676 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/034b328a-c365-4b0a-8346-1cd571d65921-logs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.459336 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-public-tls-certs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.459809 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-config-data\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.460926 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-internal-tls-certs\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.468366 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.478913 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rgl\" (UniqueName: \"kubernetes.io/projected/034b328a-c365-4b0a-8346-1cd571d65921-kube-api-access-p7rgl\") pod \"nova-api-0\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " pod="openstack/nova-api-0" Jan 27 08:11:09 crc kubenswrapper[4799]: I0127 08:11:09.615586 4799 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:11:10 crc kubenswrapper[4799]: I0127 08:11:10.096170 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:11:10 crc kubenswrapper[4799]: I0127 08:11:10.461698 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99120af8-c2e0-4d5d-8abf-84a350b88689" path="/var/lib/kubelet/pods/99120af8-c2e0-4d5d-8abf-84a350b88689/volumes" Jan 27 08:11:10 crc kubenswrapper[4799]: I0127 08:11:10.914502 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"034b328a-c365-4b0a-8346-1cd571d65921","Type":"ContainerStarted","Data":"3aded0d751c3418825c7df5a4d4839d2ed013993df821070e2de8ffc8b9aa2d3"} Jan 27 08:11:10 crc kubenswrapper[4799]: I0127 08:11:10.914796 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"034b328a-c365-4b0a-8346-1cd571d65921","Type":"ContainerStarted","Data":"e4071f639ad9f711f3ce82cf2b36a21d6fcce03277af1036060c6ec9c693832d"} Jan 27 08:11:10 crc kubenswrapper[4799]: I0127 08:11:10.914885 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"034b328a-c365-4b0a-8346-1cd571d65921","Type":"ContainerStarted","Data":"6df7a8bbfa5f25aba3030c54619c1cd478cb56bca5857024395af234fc65f172"} Jan 27 08:11:10 crc kubenswrapper[4799]: I0127 08:11:10.953352 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9532898730000001 podStartE2EDuration="1.953289873s" podCreationTimestamp="2026-01-27 08:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:11:10.940343269 +0000 UTC m=+1537.251447354" watchObservedRunningTime="2026-01-27 08:11:10.953289873 +0000 UTC m=+1537.264393958" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.747463 4799 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.799379 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1a8d96-854c-4df2-9b33-19c50ca49e14-logs\") pod \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.799552 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxqs2\" (UniqueName: \"kubernetes.io/projected/0b1a8d96-854c-4df2-9b33-19c50ca49e14-kube-api-access-wxqs2\") pod \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.799624 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-combined-ca-bundle\") pod \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.799662 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-nova-metadata-tls-certs\") pod \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.800074 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b1a8d96-854c-4df2-9b33-19c50ca49e14-logs" (OuterVolumeSpecName: "logs") pod "0b1a8d96-854c-4df2-9b33-19c50ca49e14" (UID: "0b1a8d96-854c-4df2-9b33-19c50ca49e14"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.800394 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-config-data\") pod \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\" (UID: \"0b1a8d96-854c-4df2-9b33-19c50ca49e14\") " Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.800849 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1a8d96-854c-4df2-9b33-19c50ca49e14-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.817708 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1a8d96-854c-4df2-9b33-19c50ca49e14-kube-api-access-wxqs2" (OuterVolumeSpecName: "kube-api-access-wxqs2") pod "0b1a8d96-854c-4df2-9b33-19c50ca49e14" (UID: "0b1a8d96-854c-4df2-9b33-19c50ca49e14"). InnerVolumeSpecName "kube-api-access-wxqs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.856322 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-config-data" (OuterVolumeSpecName: "config-data") pod "0b1a8d96-854c-4df2-9b33-19c50ca49e14" (UID: "0b1a8d96-854c-4df2-9b33-19c50ca49e14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.865565 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0b1a8d96-854c-4df2-9b33-19c50ca49e14" (UID: "0b1a8d96-854c-4df2-9b33-19c50ca49e14"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.883389 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b1a8d96-854c-4df2-9b33-19c50ca49e14" (UID: "0b1a8d96-854c-4df2-9b33-19c50ca49e14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.902799 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxqs2\" (UniqueName: \"kubernetes.io/projected/0b1a8d96-854c-4df2-9b33-19c50ca49e14-kube-api-access-wxqs2\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.902840 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.902849 4799 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.902858 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1a8d96-854c-4df2-9b33-19c50ca49e14-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.927254 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerID="d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc" exitCode=0 Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.927510 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"0b1a8d96-854c-4df2-9b33-19c50ca49e14","Type":"ContainerDied","Data":"d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc"} Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.927587 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b1a8d96-854c-4df2-9b33-19c50ca49e14","Type":"ContainerDied","Data":"1435e9119d11448800be298b9a3ede23d2d1afdbd9af013b5fbf3d57c9fdfd3b"} Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.927606 4799 scope.go:117] "RemoveContainer" containerID="d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.927539 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.955109 4799 scope.go:117] "RemoveContainer" containerID="94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.984927 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.992798 4799 scope.go:117] "RemoveContainer" containerID="d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc" Jan 27 08:11:11 crc kubenswrapper[4799]: E0127 08:11:11.994289 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc\": container with ID starting with d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc not found: ID does not exist" containerID="d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.994412 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc"} err="failed to get container status \"d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc\": rpc error: code = NotFound desc = could not find container \"d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc\": container with ID starting with d56c07390cc5edc50fe902563f76b32b1e98d074a3bd9ad80e74c8cc9750a5fc not found: ID does not exist" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.994497 4799 scope.go:117] "RemoveContainer" containerID="94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5" Jan 27 08:11:11 crc kubenswrapper[4799]: E0127 08:11:11.994825 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5\": container with ID starting with 94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5 not found: ID does not exist" containerID="94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5" Jan 27 08:11:11 crc kubenswrapper[4799]: I0127 08:11:11.994921 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5"} err="failed to get container status \"94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5\": rpc error: code = NotFound desc = could not find container \"94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5\": container with ID starting with 94e371e14fdfd6d2434bc6d030c057627a9a058f4deb4a8a037fd8e849d3a4e5 not found: ID does not exist" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:11.998969 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.014311 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] 
Jan 27 08:11:12 crc kubenswrapper[4799]: E0127 08:11:12.014744 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-log" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.014760 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-log" Jan 27 08:11:12 crc kubenswrapper[4799]: E0127 08:11:12.014773 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-metadata" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.014780 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-metadata" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.014963 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-metadata" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.014983 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-log" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.015991 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.017974 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.018133 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.024205 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.110031 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.110465 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.110506 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-config-data\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.110618 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk7sr\" (UniqueName: 
\"kubernetes.io/projected/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-kube-api-access-gk7sr\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.110648 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-logs\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.211895 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk7sr\" (UniqueName: \"kubernetes.io/projected/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-kube-api-access-gk7sr\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.211934 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-logs\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.212027 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.212051 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " 
pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.212070 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-config-data\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.212742 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-logs\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.217718 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.217832 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.218292 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-config-data\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.231287 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk7sr\" (UniqueName: 
\"kubernetes.io/projected/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-kube-api-access-gk7sr\") pod \"nova-metadata-0\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.338719 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.514519 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" path="/var/lib/kubelet/pods/0b1a8d96-854c-4df2-9b33-19c50ca49e14/volumes" Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.900827 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.966073 4799 generic.go:334] "Generic (PLEG): container finished" podID="57046e54-7327-4252-90e0-2a4420ea98c9" containerID="a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b" exitCode=0 Jan 27 08:11:12 crc kubenswrapper[4799]: I0127 08:11:12.966133 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57046e54-7327-4252-90e0-2a4420ea98c9","Type":"ContainerDied","Data":"a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b"} Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.085545 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.130255 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-config-data\") pod \"57046e54-7327-4252-90e0-2a4420ea98c9\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.130462 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzht9\" (UniqueName: \"kubernetes.io/projected/57046e54-7327-4252-90e0-2a4420ea98c9-kube-api-access-tzht9\") pod \"57046e54-7327-4252-90e0-2a4420ea98c9\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.130500 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-combined-ca-bundle\") pod \"57046e54-7327-4252-90e0-2a4420ea98c9\" (UID: \"57046e54-7327-4252-90e0-2a4420ea98c9\") " Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.137910 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57046e54-7327-4252-90e0-2a4420ea98c9-kube-api-access-tzht9" (OuterVolumeSpecName: "kube-api-access-tzht9") pod "57046e54-7327-4252-90e0-2a4420ea98c9" (UID: "57046e54-7327-4252-90e0-2a4420ea98c9"). InnerVolumeSpecName "kube-api-access-tzht9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.169014 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57046e54-7327-4252-90e0-2a4420ea98c9" (UID: "57046e54-7327-4252-90e0-2a4420ea98c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.179006 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-config-data" (OuterVolumeSpecName: "config-data") pod "57046e54-7327-4252-90e0-2a4420ea98c9" (UID: "57046e54-7327-4252-90e0-2a4420ea98c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.233355 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzht9\" (UniqueName: \"kubernetes.io/projected/57046e54-7327-4252-90e0-2a4420ea98c9-kube-api-access-tzht9\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.233398 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.233411 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57046e54-7327-4252-90e0-2a4420ea98c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.976127 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57046e54-7327-4252-90e0-2a4420ea98c9","Type":"ContainerDied","Data":"7bf2868714cae3f04d63c5b5ec7f9e975ea919a857a298c83a3c57c279a5fd74"} Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.976146 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.976530 4799 scope.go:117] "RemoveContainer" containerID="a0163079c0724b85d7db787a56f1f25e7534892ebfd6e188a02d7ed01475622b" Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.978988 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89","Type":"ContainerStarted","Data":"9baffec134930c3ab03eac84affdcfbddaeaf581caaa16db927895c2feaa8b6f"} Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.979023 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89","Type":"ContainerStarted","Data":"947b7b89455c6bd8431f5a9a840ca3ceaaa323705be0a8b142bb4d7b5cf00a77"} Jan 27 08:11:13 crc kubenswrapper[4799]: I0127 08:11:13.979033 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89","Type":"ContainerStarted","Data":"a4560661f3873cc9c6037b9ddf6cdbcd8b6e6cb830fdfc3b988ebe18f50da85a"} Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.012836 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.012808503 podStartE2EDuration="3.012808503s" podCreationTimestamp="2026-01-27 08:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:11:13.998548313 +0000 UTC m=+1540.309652388" watchObservedRunningTime="2026-01-27 08:11:14.012808503 +0000 UTC m=+1540.323912588" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.031557 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.043852 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.055688 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:11:14 crc kubenswrapper[4799]: E0127 08:11:14.056185 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57046e54-7327-4252-90e0-2a4420ea98c9" containerName="nova-scheduler-scheduler" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.056208 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="57046e54-7327-4252-90e0-2a4420ea98c9" containerName="nova-scheduler-scheduler" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.056458 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="57046e54-7327-4252-90e0-2a4420ea98c9" containerName="nova-scheduler-scheduler" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.057141 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.060434 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.064887 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.151422 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhkn\" (UniqueName: \"kubernetes.io/projected/69778bc9-c84e-42d0-9645-7fd3afa2ca28-kube-api-access-fvhkn\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.151467 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-config-data\") pod \"nova-scheduler-0\" 
(UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.151526 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.252715 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhkn\" (UniqueName: \"kubernetes.io/projected/69778bc9-c84e-42d0-9645-7fd3afa2ca28-kube-api-access-fvhkn\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.252764 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-config-data\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.252829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.257693 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-config-data\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.257970 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.269774 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhkn\" (UniqueName: \"kubernetes.io/projected/69778bc9-c84e-42d0-9645-7fd3afa2ca28-kube-api-access-fvhkn\") pod \"nova-scheduler-0\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.372650 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.481023 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57046e54-7327-4252-90e0-2a4420ea98c9" path="/var/lib/kubelet/pods/57046e54-7327-4252-90e0-2a4420ea98c9/volumes" Jan 27 08:11:14 crc kubenswrapper[4799]: W0127 08:11:14.836593 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69778bc9_c84e_42d0_9645_7fd3afa2ca28.slice/crio-f1c93ed585f701ee8e159ab0d001f5debb73bda9c36a4fa0913614dbca815445 WatchSource:0}: Error finding container f1c93ed585f701ee8e159ab0d001f5debb73bda9c36a4fa0913614dbca815445: Status 404 returned error can't find the container with id f1c93ed585f701ee8e159ab0d001f5debb73bda9c36a4fa0913614dbca815445 Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.839176 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:11:14 crc kubenswrapper[4799]: I0127 08:11:14.987081 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"69778bc9-c84e-42d0-9645-7fd3afa2ca28","Type":"ContainerStarted","Data":"f1c93ed585f701ee8e159ab0d001f5debb73bda9c36a4fa0913614dbca815445"} Jan 27 08:11:16 crc kubenswrapper[4799]: I0127 08:11:16.019528 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69778bc9-c84e-42d0-9645-7fd3afa2ca28","Type":"ContainerStarted","Data":"bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66"} Jan 27 08:11:16 crc kubenswrapper[4799]: I0127 08:11:16.056775 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.056739431 podStartE2EDuration="2.056739431s" podCreationTimestamp="2026-01-27 08:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 08:11:16.043012565 +0000 UTC m=+1542.354116710" watchObservedRunningTime="2026-01-27 08:11:16.056739431 +0000 UTC m=+1542.367843536" Jan 27 08:11:16 crc kubenswrapper[4799]: I0127 08:11:16.699270 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 08:11:16 crc kubenswrapper[4799]: I0127 08:11:16.699383 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0b1a8d96-854c-4df2-9b33-19c50ca49e14" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": dial tcp 10.217.0.188:8775: i/o timeout" Jan 27 08:11:17 crc kubenswrapper[4799]: I0127 08:11:17.339682 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 08:11:17 crc kubenswrapper[4799]: I0127 08:11:17.339750 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 08:11:19 crc kubenswrapper[4799]: I0127 08:11:19.372771 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 08:11:19 crc kubenswrapper[4799]: I0127 08:11:19.617122 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 08:11:19 crc kubenswrapper[4799]: I0127 08:11:19.617450 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 08:11:20 crc kubenswrapper[4799]: I0127 08:11:20.634509 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 08:11:20 crc kubenswrapper[4799]: I0127 08:11:20.634509 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 08:11:22 crc kubenswrapper[4799]: I0127 08:11:22.338930 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 08:11:22 crc kubenswrapper[4799]: I0127 08:11:22.339479 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 08:11:23 crc kubenswrapper[4799]: I0127 08:11:23.354502 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 27 08:11:23 crc kubenswrapper[4799]: I0127 08:11:23.354575 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 08:11:24 crc kubenswrapper[4799]: I0127 08:11:24.373508 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 08:11:24 crc kubenswrapper[4799]: I0127 08:11:24.424102 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 08:11:25 crc kubenswrapper[4799]: I0127 08:11:25.168176 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 08:11:29 crc kubenswrapper[4799]: I0127 08:11:29.203140 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 08:11:29 crc kubenswrapper[4799]: I0127 08:11:29.625490 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 08:11:29 crc kubenswrapper[4799]: I0127 08:11:29.627029 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 08:11:29 crc kubenswrapper[4799]: I0127 08:11:29.628122 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 08:11:29 crc kubenswrapper[4799]: I0127 08:11:29.636244 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 08:11:30 crc kubenswrapper[4799]: I0127 08:11:30.179755 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 08:11:30 crc kubenswrapper[4799]: I0127 08:11:30.187108 4799 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 08:11:32 crc kubenswrapper[4799]: I0127 08:11:32.372450 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 08:11:32 crc kubenswrapper[4799]: I0127 08:11:32.377260 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 08:11:32 crc kubenswrapper[4799]: I0127 08:11:32.380014 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 08:11:33 crc kubenswrapper[4799]: I0127 08:11:33.231388 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.619849 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9vvt6"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.621677 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.625764 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.673520 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9vvt6"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.678911 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ddq\" (UniqueName: \"kubernetes.io/projected/f97b84a5-a34c-405f-8357-70cad8efedbc-kube-api-access-68ddq\") pod \"root-account-create-update-9vvt6\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.679022 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f97b84a5-a34c-405f-8357-70cad8efedbc-operator-scripts\") pod \"root-account-create-update-9vvt6\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.694401 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5wzgq"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.705543 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5wzgq"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.735390 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3eda-account-create-update-v5sgh"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.772524 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3eda-account-create-update-v5sgh"] Jan 27 08:11:51 crc kubenswrapper[4799]: 
I0127 08:11:51.784549 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f97b84a5-a34c-405f-8357-70cad8efedbc-operator-scripts\") pod \"root-account-create-update-9vvt6\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.784699 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68ddq\" (UniqueName: \"kubernetes.io/projected/f97b84a5-a34c-405f-8357-70cad8efedbc-kube-api-access-68ddq\") pod \"root-account-create-update-9vvt6\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.785860 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f97b84a5-a34c-405f-8357-70cad8efedbc-operator-scripts\") pod \"root-account-create-update-9vvt6\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.792957 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3eda-account-create-update-9krkt"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.794085 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.805661 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.811830 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3eda-account-create-update-9krkt"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.860121 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68ddq\" (UniqueName: \"kubernetes.io/projected/f97b84a5-a34c-405f-8357-70cad8efedbc-kube-api-access-68ddq\") pod \"root-account-create-update-9vvt6\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.891287 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxth\" (UniqueName: \"kubernetes.io/projected/ade86985-ca70-4f21-ae7a-825353f912cb-kube-api-access-vtxth\") pod \"neutron-3eda-account-create-update-9krkt\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.891423 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade86985-ca70-4f21-ae7a-825353f912cb-operator-scripts\") pod \"neutron-3eda-account-create-update-9krkt\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.909463 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-07af-account-create-update-wm8x8"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.911246 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.918993 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.950355 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-07af-account-create-update-jg2hs"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.969368 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-07af-account-create-update-jg2hs"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.974192 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.978953 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.979168 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="07e52675-0afa-4579-a5c1-f0aba31dd6e7" containerName="openstackclient" containerID="cri-o://932c19e1786489198ed2fd00256bc0ea9ef4db8f5bd057c7ca4751c183f4f1d6" gracePeriod=2 Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.996220 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a55cd9e-7386-41a1-912c-0876a917bd93-operator-scripts\") pod \"nova-api-07af-account-create-update-wm8x8\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.996339 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fghhh\" (UniqueName: 
\"kubernetes.io/projected/7a55cd9e-7386-41a1-912c-0876a917bd93-kube-api-access-fghhh\") pod \"nova-api-07af-account-create-update-wm8x8\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.996385 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxth\" (UniqueName: \"kubernetes.io/projected/ade86985-ca70-4f21-ae7a-825353f912cb-kube-api-access-vtxth\") pod \"neutron-3eda-account-create-update-9krkt\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.996446 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade86985-ca70-4f21-ae7a-825353f912cb-operator-scripts\") pod \"neutron-3eda-account-create-update-9krkt\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:51 crc kubenswrapper[4799]: I0127 08:11:51.997282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade86985-ca70-4f21-ae7a-825353f912cb-operator-scripts\") pod \"neutron-3eda-account-create-update-9krkt\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.016876 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-9pcqg"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.018136 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.033559 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.033827 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.049439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.057406 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-9pcqg"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.076406 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-07af-account-create-update-wm8x8"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.081443 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxth\" (UniqueName: \"kubernetes.io/projected/ade86985-ca70-4f21-ae7a-825353f912cb-kube-api-access-vtxth\") pod \"neutron-3eda-account-create-update-9krkt\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.086987 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5c72-account-create-update-mpznk"] Jan 27 08:11:52 crc kubenswrapper[4799]: E0127 08:11:52.088280 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e52675-0afa-4579-a5c1-f0aba31dd6e7" containerName="openstackclient" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.088314 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e52675-0afa-4579-a5c1-f0aba31dd6e7" containerName="openstackclient" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.088516 4799 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="07e52675-0afa-4579-a5c1-f0aba31dd6e7" containerName="openstackclient" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.089083 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.092112 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.099284 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa95ce2-79d9-4314-af1c-6d3b93667cb5-operator-scripts\") pod \"nova-cell0-0f1e-account-create-update-9pcqg\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.099399 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a55cd9e-7386-41a1-912c-0876a917bd93-operator-scripts\") pod \"nova-api-07af-account-create-update-wm8x8\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.099432 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4fdq\" (UniqueName: \"kubernetes.io/projected/caa95ce2-79d9-4314-af1c-6d3b93667cb5-kube-api-access-h4fdq\") pod \"nova-cell0-0f1e-account-create-update-9pcqg\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.099500 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fghhh\" (UniqueName: 
\"kubernetes.io/projected/7a55cd9e-7386-41a1-912c-0876a917bd93-kube-api-access-fghhh\") pod \"nova-api-07af-account-create-update-wm8x8\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.100291 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a55cd9e-7386-41a1-912c-0876a917bd93-operator-scripts\") pod \"nova-api-07af-account-create-update-wm8x8\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.100706 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c37c-account-create-update-mb86j"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.101775 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: E0127 08:11:52.100708 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:52 crc kubenswrapper[4799]: E0127 08:11:52.102060 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data podName:0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0 nodeName:}" failed. No retries permitted until 2026-01-27 08:11:52.602045246 +0000 UTC m=+1578.913149311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data") pod "rabbitmq-cell1-server-0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0") : configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.105599 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.123753 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.141839 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c37c-account-create-update-mb86j"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.175367 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5c72-account-create-update-mpznk"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.179991 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fghhh\" (UniqueName: \"kubernetes.io/projected/7a55cd9e-7386-41a1-912c-0876a917bd93-kube-api-access-fghhh\") pod \"nova-api-07af-account-create-update-wm8x8\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.201347 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58s4\" (UniqueName: \"kubernetes.io/projected/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-kube-api-access-z58s4\") pod \"placement-c37c-account-create-update-mb86j\" (UID: \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.201697 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa95ce2-79d9-4314-af1c-6d3b93667cb5-operator-scripts\") pod \"nova-cell0-0f1e-account-create-update-9pcqg\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.201794 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf92\" (UniqueName: \"kubernetes.io/projected/2fa13966-417e-4920-8ecc-5afc73396410-kube-api-access-hkf92\") pod \"cinder-5c72-account-create-update-mpznk\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.201907 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fa13966-417e-4920-8ecc-5afc73396410-operator-scripts\") pod \"cinder-5c72-account-create-update-mpznk\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.202007 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4fdq\" (UniqueName: \"kubernetes.io/projected/caa95ce2-79d9-4314-af1c-6d3b93667cb5-kube-api-access-h4fdq\") pod \"nova-cell0-0f1e-account-create-update-9pcqg\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.202122 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-operator-scripts\") pod \"placement-c37c-account-create-update-mb86j\" (UID: \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " 
pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.203286 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa95ce2-79d9-4314-af1c-6d3b93667cb5-operator-scripts\") pod \"nova-cell0-0f1e-account-create-update-9pcqg\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.244487 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.249260 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-dt4kd"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.249509 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-dt4kd" podUID="f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" containerName="openstack-network-exporter" containerID="cri-o://5509cf2b97d8f78121cc9bf786809bf94e29e6e3d4c6777462e39be2a813f69f" gracePeriod=30 Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.267172 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-zct2j"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.275768 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4fdq\" (UniqueName: \"kubernetes.io/projected/caa95ce2-79d9-4314-af1c-6d3b93667cb5-kube-api-access-h4fdq\") pod \"nova-cell0-0f1e-account-create-update-9pcqg\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.312956 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkf92\" (UniqueName: 
\"kubernetes.io/projected/2fa13966-417e-4920-8ecc-5afc73396410-kube-api-access-hkf92\") pod \"cinder-5c72-account-create-update-mpznk\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.313308 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fa13966-417e-4920-8ecc-5afc73396410-operator-scripts\") pod \"cinder-5c72-account-create-update-mpznk\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.313375 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-operator-scripts\") pod \"placement-c37c-account-create-update-mb86j\" (UID: \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.313427 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z58s4\" (UniqueName: \"kubernetes.io/projected/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-kube-api-access-z58s4\") pod \"placement-c37c-account-create-update-mb86j\" (UID: \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.314161 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fa13966-417e-4920-8ecc-5afc73396410-operator-scripts\") pod \"cinder-5c72-account-create-update-mpznk\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.316892 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-operator-scripts\") pod \"placement-c37c-account-create-update-mb86j\" (UID: \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.362277 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.362783 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="ovn-northd" containerID="cri-o://fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" gracePeriod=30 Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.363323 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="openstack-network-exporter" containerID="cri-o://960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d" gracePeriod=30 Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.376088 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z58s4\" (UniqueName: \"kubernetes.io/projected/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-kube-api-access-z58s4\") pod \"placement-c37c-account-create-update-mb86j\" (UID: \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.410866 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkf92\" (UniqueName: \"kubernetes.io/projected/2fa13966-417e-4920-8ecc-5afc73396410-kube-api-access-hkf92\") pod \"cinder-5c72-account-create-update-mpznk\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " 
pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.591542 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.609766 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.610117 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.673425 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dt4kd_f3a68e3d-78f4-4a7a-9915-0801f0ffeed6/openstack-network-exporter/0.log" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.673945 4799 generic.go:334] "Generic (PLEG): container finished" podID="f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" containerID="5509cf2b97d8f78121cc9bf786809bf94e29e6e3d4c6777462e39be2a813f69f" exitCode=2 Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.676177 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a9d5a4-a457-4cf4-92aa-2de96430c864" path="/var/lib/kubelet/pods/94a9d5a4-a457-4cf4-92aa-2de96430c864/volumes" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.677059 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b181c307-a5d9-4821-8b81-9bf5539511e5" path="/var/lib/kubelet/pods/b181c307-a5d9-4821-8b81-9bf5539511e5/volumes" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.677759 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ea3026-b416-47dd-b55a-994533c7f302" path="/var/lib/kubelet/pods/b9ea3026-b416-47dd-b55a-994533c7f302/volumes" Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.681183 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-metrics-dt4kd" event={"ID":"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6","Type":"ContainerDied","Data":"5509cf2b97d8f78121cc9bf786809bf94e29e6e3d4c6777462e39be2a813f69f"} Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.681419 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lx6nr"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.728826 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 08:11:52 crc kubenswrapper[4799]: E0127 08:11:52.754590 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:52 crc kubenswrapper[4799]: E0127 08:11:52.754829 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data podName:0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0 nodeName:}" failed. No retries permitted until 2026-01-27 08:11:53.75481108 +0000 UTC m=+1580.065915145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data") pod "rabbitmq-cell1-server-0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0") : configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.775372 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-22s2d"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.803258 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-grg95"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.824541 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-22s2d"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.840360 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-grg95"] Jan 27 08:11:52 crc kubenswrapper[4799]: E0127 08:11:52.859764 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 08:11:52 crc kubenswrapper[4799]: E0127 08:11:52.859982 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data podName:8d822fe6-f547-4b8f-a6e4-c7256e1b2ace nodeName:}" failed. No retries permitted until 2026-01-27 08:11:53.359967648 +0000 UTC m=+1579.671071713 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data") pod "rabbitmq-server-0" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace") : configmap "rabbitmq-config-data" not found Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.888626 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c37c-account-create-update-6x5kp"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.893113 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5c72-account-create-update-9x5nw"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.910885 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5c72-account-create-update-9x5nw"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.926806 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c37c-account-create-update-6x5kp"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.947729 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-n9x6s"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.971745 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-n9x6s"] Jan 27 08:11:52 crc kubenswrapper[4799]: I0127 08:11:52.989833 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cj8lr"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.018180 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cj8lr"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.023928 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b6fd-account-create-update-zjgk2"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.035831 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b6fd-account-create-update-zjgk2"] Jan 27 08:11:53 crc kubenswrapper[4799]: 
I0127 08:11:53.048081 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d9mwm"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.069645 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d9mwm"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.093117 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw8tr"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.108470 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw8tr"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.128647 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8v2m"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.167472 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-m8v2m"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.240361 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.241059 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="openstack-network-exporter" containerID="cri-o://af4582fbc376280b8069bf7f7b55933070749f10ea9380861c1e04e1287e288f" gracePeriod=300 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.249364 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-csjnn"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.260375 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-csjnn"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.271345 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.273241 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="openstack-network-exporter" containerID="cri-o://97b857c500f0dc120edc4b9f7299baa035a1a1571e8961c116690abe3c273321" gracePeriod=300 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.288556 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.288607 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hd42p"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.288619 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5ff7b8d449-xjt48"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.289243 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5ff7b8d449-xjt48" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-api" containerID="cri-o://0dcbd436a5762bcc61d230602c621a9b9849e2da017a7bac5c585459bb6be746" gracePeriod=30 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.303899 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-log" containerID="cri-o://47f6d93a69dd90911aea2e658078c1ee7cf68c9157a2c5886005700a1107b370" gracePeriod=30 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.304332 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" podUID="d57ed20b-0573-4924-aeee-bef05838e330" containerName="dnsmasq-dns" containerID="cri-o://242ed1a16fd5f7f954693a993a1d2ded4c83efdbd95645efc9040027f0bb6c24" gracePeriod=10 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.304385 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-httpd" containerID="cri-o://8cd4ca5237c50f4d23bf8d52a5873e5a8a629a0e179bd22093979620a71f464d" gracePeriod=30 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.304430 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5ff7b8d449-xjt48" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-httpd" containerID="cri-o://7a13a4dba57a64680601c65f46bc4e1fd1ddd9881983073fa8db00588d91d96c" gracePeriod=30 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.361702 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nnhs2"] Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.403610 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.421402 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data podName:8d822fe6-f547-4b8f-a6e4-c7256e1b2ace nodeName:}" failed. No retries permitted until 2026-01-27 08:11:54.421375822 +0000 UTC m=+1580.732479887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data") pod "rabbitmq-server-0" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace") : configmap "rabbitmq-config-data" not found Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.452769 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-nnhs2"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.454947 4799 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/cinder-scheduler-0" secret="" err="secret \"cinder-cinder-dockercfg-55nfh\" not found" Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.483286 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.483612 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-log" containerID="cri-o://645e7a56ca8ac37dc99398357884c68673d8c691b0491e5e93509703b5f8f491" gracePeriod=30 Jan 27 08:11:53 crc kubenswrapper[4799]: I0127 08:11:53.484113 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-httpd" containerID="cri-o://9163187825d75d65281abfc71d2b39e88d5e1b584e17b29a6cb086d7ce38d30f" gracePeriod=30 Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.536781 4799 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.536846 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:54.036827692 +0000 UTC m=+1580.347931757 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-config-data" not found Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.536904 4799 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.536931 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:54.036924375 +0000 UTC m=+1580.348028440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scripts" not found Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.536972 4799 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 27 08:11:53 crc kubenswrapper[4799]: E0127 08:11:53.537007 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:54.036989117 +0000 UTC m=+1580.348093172 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scheduler-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.634563 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="ovsdbserver-sb" containerID="cri-o://8b889d06f7ebe01917c15ab23bb6f82de1d3280d85886ca49cee0080e8046c73" gracePeriod=300 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.676517 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="ovsdbserver-nb" containerID="cri-o://ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508" gracePeriod=300 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.701884 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zbptw"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.739454 4799 generic.go:334] "Generic (PLEG): container finished" podID="54237546-70b8-4475-bd97-53ea6047786b" containerID="960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d" exitCode=2 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.739574 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"54237546-70b8-4475-bd97-53ea6047786b","Type":"ContainerDied","Data":"960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d"} Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:53.763679 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:53.763734 4799 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data podName:0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0 nodeName:}" failed. No retries permitted until 2026-01-27 08:11:55.763719342 +0000 UTC m=+1582.074823407 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data") pod "rabbitmq-cell1-server-0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0") : configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.768604 4799 generic.go:334] "Generic (PLEG): container finished" podID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerID="af4582fbc376280b8069bf7f7b55933070749f10ea9380861c1e04e1287e288f" exitCode=2 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.768672 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b7da0a-2774-4bae-ba2f-3b943e027082","Type":"ContainerDied","Data":"af4582fbc376280b8069bf7f7b55933070749f10ea9380861c1e04e1287e288f"} Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.778042 4799 generic.go:334] "Generic (PLEG): container finished" podID="d57ed20b-0573-4924-aeee-bef05838e330" containerID="242ed1a16fd5f7f954693a993a1d2ded4c83efdbd95645efc9040027f0bb6c24" exitCode=0 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.778145 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" event={"ID":"d57ed20b-0573-4924-aeee-bef05838e330","Type":"ContainerDied","Data":"242ed1a16fd5f7f954693a993a1d2ded4c83efdbd95645efc9040027f0bb6c24"} Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.790628 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_617cc655-aae2-4918-ba79-05e346cf9200/ovsdbserver-sb/0.log" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.790671 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="617cc655-aae2-4918-ba79-05e346cf9200" containerID="97b857c500f0dc120edc4b9f7299baa035a1a1571e8961c116690abe3c273321" exitCode=2 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.790734 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"617cc655-aae2-4918-ba79-05e346cf9200","Type":"ContainerDied","Data":"97b857c500f0dc120edc4b9f7299baa035a1a1571e8961c116690abe3c273321"} Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.797726 4799 generic.go:334] "Generic (PLEG): container finished" podID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerID="47f6d93a69dd90911aea2e658078c1ee7cf68c9157a2c5886005700a1107b370" exitCode=143 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.797827 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zbptw"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.797855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fbb51a95-a5db-4e7c-8cca-a59d07200ad5","Type":"ContainerDied","Data":"47f6d93a69dd90911aea2e658078c1ee7cf68c9157a2c5886005700a1107b370"} Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.828443 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dt4kd_f3a68e3d-78f4-4a7a-9915-0801f0ffeed6/openstack-network-exporter/0.log" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.828487 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dt4kd" event={"ID":"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6","Type":"ContainerDied","Data":"423e0ff5622dedf1b5358e30c57f3108bb225da07c9d45dea4a0517cd808c0a7"} Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.828511 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="423e0ff5622dedf1b5358e30c57f3108bb225da07c9d45dea4a0517cd808c0a7" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.873420 4799 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.878617 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-server" containerID="cri-o://c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879661 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="swift-recon-cron" containerID="cri-o://c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879712 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="rsync" containerID="cri-o://45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879741 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-expirer" containerID="cri-o://3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879778 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-updater" containerID="cri-o://9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879810 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-auditor" containerID="cri-o://654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879840 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-replicator" containerID="cri-o://a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879867 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-server" containerID="cri-o://ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879895 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-updater" containerID="cri-o://e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879924 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-auditor" containerID="cri-o://899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.879954 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-replicator" containerID="cri-o://929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 
08:11:53.879982 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-server" containerID="cri-o://27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.880027 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-reaper" containerID="cri-o://5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.880056 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-auditor" containerID="cri-o://c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.880114 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-replicator" containerID="cri-o://f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.908597 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.908917 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerName="cinder-scheduler" containerID="cri-o://887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.909449 4799 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/cinder-scheduler-0" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerName="probe" containerID="cri-o://2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:53.934670 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508 is running failed: container process not found" containerID="ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:53.938467 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508 is running failed: container process not found" containerID="ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:53.942508 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508 is running failed: container process not found" containerID="ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:53.942574 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="ovsdbserver-nb" Jan 27 
08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.995866 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.996132 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api-log" containerID="cri-o://f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:53.996278 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api" containerID="cri-o://116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.017812 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3eda-account-create-update-9krkt"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.039721 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.046422 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.046646 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-log" containerID="cri-o://947b7b89455c6bd8431f5a9a840ca3ceaaa323705be0a8b142bb4d7b5cf00a77" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.046781 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-metadata" 
containerID="cri-o://9baffec134930c3ab03eac84affdcfbddaeaf581caaa16db927895c2feaa8b6f" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.065627 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dgncp"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.076886 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dgncp"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.078250 4799 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.078319 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:55.078286581 +0000 UTC m=+1581.389390646 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scheduler-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.078637 4799 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.078663 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:55.078655111 +0000 UTC m=+1581.389759176 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.078699 4799 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.078718 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:55.078711332 +0000 UTC m=+1581.389815397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scripts" not found Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.111482 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-abfe-account-create-update-l8nsq"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.120967 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-abfe-account-create-update-l8nsq"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.167889 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c8985574d-z64hk"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.168139 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7c8985574d-z64hk" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-log" containerID="cri-o://801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.168564 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7c8985574d-z64hk" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-api" containerID="cri-o://c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.182629 4799 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 27 08:11:54 crc kubenswrapper[4799]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 08:11:54 crc kubenswrapper[4799]: + source /usr/local/bin/container-scripts/functions Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNBridge=br-int Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNRemote=tcp:localhost:6642 Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNEncapType=geneve Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNAvailabilityZones= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ EnableChassisAsGateway=true Jan 27 08:11:54 crc kubenswrapper[4799]: ++ PhysicalNetworks= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNHostName= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 08:11:54 crc kubenswrapper[4799]: ++ ovs_dir=/var/lib/openvswitch Jan 27 08:11:54 crc kubenswrapper[4799]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 08:11:54 crc kubenswrapper[4799]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 08:11:54 crc kubenswrapper[4799]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + cleanup_ovsdb_server_semaphore Jan 27 08:11:54 crc kubenswrapper[4799]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 08:11:54 crc kubenswrapper[4799]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 08:11:54 crc kubenswrapper[4799]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-zct2j" message=< Jan 27 08:11:54 crc kubenswrapper[4799]: Exiting ovsdb-server (5) [ OK ] Jan 27 08:11:54 crc kubenswrapper[4799]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 08:11:54 crc kubenswrapper[4799]: + source /usr/local/bin/container-scripts/functions Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNBridge=br-int Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNRemote=tcp:localhost:6642 Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNEncapType=geneve Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNAvailabilityZones= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ EnableChassisAsGateway=true Jan 27 08:11:54 crc kubenswrapper[4799]: ++ PhysicalNetworks= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNHostName= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 08:11:54 crc kubenswrapper[4799]: ++ ovs_dir=/var/lib/openvswitch Jan 27 08:11:54 crc kubenswrapper[4799]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 08:11:54 crc kubenswrapper[4799]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 08:11:54 crc kubenswrapper[4799]: ++ 
SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + cleanup_ovsdb_server_semaphore Jan 27 08:11:54 crc kubenswrapper[4799]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 08:11:54 crc kubenswrapper[4799]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 08:11:54 crc kubenswrapper[4799]: > Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.182670 4799 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 27 08:11:54 crc kubenswrapper[4799]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 08:11:54 crc kubenswrapper[4799]: + source /usr/local/bin/container-scripts/functions Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNBridge=br-int Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNRemote=tcp:localhost:6642 Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNEncapType=geneve Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNAvailabilityZones= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ EnableChassisAsGateway=true Jan 27 08:11:54 crc kubenswrapper[4799]: ++ PhysicalNetworks= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ OVNHostName= Jan 27 08:11:54 crc kubenswrapper[4799]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 08:11:54 crc kubenswrapper[4799]: ++ 
ovs_dir=/var/lib/openvswitch Jan 27 08:11:54 crc kubenswrapper[4799]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 08:11:54 crc kubenswrapper[4799]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 08:11:54 crc kubenswrapper[4799]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + sleep 0.5 Jan 27 08:11:54 crc kubenswrapper[4799]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 08:11:54 crc kubenswrapper[4799]: + cleanup_ovsdb_server_semaphore Jan 27 08:11:54 crc kubenswrapper[4799]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 08:11:54 crc kubenswrapper[4799]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 08:11:54 crc kubenswrapper[4799]: > pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" containerID="cri-o://cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.182705 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" containerID="cri-o://cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" gracePeriod=29 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.182800 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-cell1-server-0" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerName="rabbitmq" containerID="cri-o://966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713" gracePeriod=604800 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.185158 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jqxqz"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.193600 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" containerID="cri-o://21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" gracePeriod=29 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.197476 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jqxqz"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.217575 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.217787 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-log" containerID="cri-o://e4071f639ad9f711f3ce82cf2b36a21d6fcce03277af1036060c6ec9c693832d" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.218276 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-api" containerID="cri-o://3aded0d751c3418825c7df5a4d4839d2ed013993df821070e2de8ffc8b9aa2d3" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.246730 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8gkts"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.260618 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-create-8gkts"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.270355 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f310-account-create-update-nffw8"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.277482 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f310-account-create-update-nffw8"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.279903 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b7647d64-tp8mw"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.280216 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener-log" containerID="cri-o://a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.280393 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener" containerID="cri-o://f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.289110 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5c72-account-create-update-mpznk"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.290899 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dt4kd_f3a68e3d-78f4-4a7a-9915-0801f0ffeed6/openstack-network-exporter/0.log" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.290947 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.304472 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9vvt6"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.313018 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-85fc64b547-v7lvv"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.313369 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-85fc64b547-v7lvv" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker-log" containerID="cri-o://f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.313508 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-85fc64b547-v7lvv" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker" containerID="cri-o://5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.317068 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.331436 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-774s2"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.336120 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-774s2"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.355310 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-9pcqg"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.384618 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovs-rundir\") pod \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.384714 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-config\") pod \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.384881 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkljb\" (UniqueName: \"kubernetes.io/projected/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-kube-api-access-rkljb\") pod \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.385000 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovn-rundir\") pod \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.385022 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-metrics-certs-tls-certs\") pod \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.385060 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-combined-ca-bundle\") pod \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\" (UID: \"f3a68e3d-78f4-4a7a-9915-0801f0ffeed6\") " Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 
08:11:54.388336 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-config" (OuterVolumeSpecName: "config") pod "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" (UID: "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.388405 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" (UID: "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.388427 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" (UID: "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.397050 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9pj2r"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.399936 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-kube-api-access-rkljb" (OuterVolumeSpecName: "kube-api-access-rkljb") pod "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" (UID: "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6"). InnerVolumeSpecName "kube-api-access-rkljb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.417677 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9pj2r"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.422847 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-796576645f-ws7ff"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.423325 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-796576645f-ws7ff" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-httpd" containerID="cri-o://a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.424739 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-796576645f-ws7ff" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-server" containerID="cri-o://d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.433484 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pjzqp"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.445996 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85c6c54fbb-zhvhw"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.446214 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85c6c54fbb-zhvhw" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api-log" containerID="cri-o://40d6b9faa74af8ff6a32d01f9fc3a6c0f6258a0b08ea53fa5774e5655a3aa97d" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.448944 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85c6c54fbb-zhvhw" 
podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api" containerID="cri-o://f6c0c751dfd74d698477e4e018861e43ef7141cef238f287a434550b2a21af4b" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.483615 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" (UID: "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.490819 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.490847 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.490859 4799 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.490874 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.490884 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkljb\" (UniqueName: \"kubernetes.io/projected/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-kube-api-access-rkljb\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 
08:11:54.491792 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.491869 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data podName:8d822fe6-f547-4b8f-a6e4-c7256e1b2ace nodeName:}" failed. No retries permitted until 2026-01-27 08:11:56.491852959 +0000 UTC m=+1582.802957014 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data") pod "rabbitmq-server-0" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace") : configmap "rabbitmq-config-data" not found Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.577354 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7f06a1-752e-4e8b-9d59-991326981dda" path="/var/lib/kubelet/pods/1e7f06a1-752e-4e8b-9d59-991326981dda/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.578955 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24508566-6f8e-48c4-a3e5-088544cd6b94" path="/var/lib/kubelet/pods/24508566-6f8e-48c4-a3e5-088544cd6b94/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.588464 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd94d42-7c61-4f0a-a655-d4f85cd03d88" path="/var/lib/kubelet/pods/2bd94d42-7c61-4f0a-a655-d4f85cd03d88/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.589733 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f26f77-a15a-4c1a-a697-fd3823a47c5b" path="/var/lib/kubelet/pods/49f26f77-a15a-4c1a-a697-fd3823a47c5b/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.590475 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b01cc32-2ffb-4377-afff-7fbaa3d14de7" 
path="/var/lib/kubelet/pods/5b01cc32-2ffb-4377-afff-7fbaa3d14de7/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.591557 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6228dcb6-2940-4494-b1fa-838d28618279" path="/var/lib/kubelet/pods/6228dcb6-2940-4494-b1fa-838d28618279/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.592161 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="666620a1-4d36-48ba-a226-e4ba6b9d82a0" path="/var/lib/kubelet/pods/666620a1-4d36-48ba-a226-e4ba6b9d82a0/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.592745 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a00aa8f-6e63-4f84-8353-1ba24e84e64d" path="/var/lib/kubelet/pods/6a00aa8f-6e63-4f84-8353-1ba24e84e64d/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.593378 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74da5422-dcf4-48cb-a29a-7378082a827d" path="/var/lib/kubelet/pods/74da5422-dcf4-48cb-a29a-7378082a827d/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.594448 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cece98-fc44-4b30-b861-92affbfe1e8a" path="/var/lib/kubelet/pods/81cece98-fc44-4b30-b861-92affbfe1e8a/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.594999 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f49a02-4934-43be-aa45-d24a40b20db2" path="/var/lib/kubelet/pods/84f49a02-4934-43be-aa45-d24a40b20db2/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.595720 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3acfc06-6e63-4a08-a201-50a9d6fe8ed5" path="/var/lib/kubelet/pods/a3acfc06-6e63-4a08-a201-50a9d6fe8ed5/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.601445 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ee0ddb-6bdc-4388-8b45-f58e81417a13" 
path="/var/lib/kubelet/pods/a7ee0ddb-6bdc-4388-8b45-f58e81417a13/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.602072 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b3796e-e4b6-41e1-b1f5-dc7ade294816" path="/var/lib/kubelet/pods/b3b3796e-e4b6-41e1-b1f5-dc7ade294816/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.602597 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c82402-d3a9-494f-a979-881fa184a4e1" path="/var/lib/kubelet/pods/c8c82402-d3a9-494f-a979-881fa184a4e1/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.605516 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cfa24c-646d-435d-bd6f-30199969555c" path="/var/lib/kubelet/pods/c8cfa24c-646d-435d-bd6f-30199969555c/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.614112 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.614508 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.615036 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.615139 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.619635 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe230e4-078d-4aeb-858f-296dd5505f4a" path="/var/lib/kubelet/pods/dfe230e4-078d-4aeb-858f-296dd5505f4a/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.620484 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0445ff2-2b89-41e9-81f3-953e21253b19" path="/var/lib/kubelet/pods/e0445ff2-2b89-41e9-81f3-953e21253b19/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.621584 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87" path="/var/lib/kubelet/pods/f9a1b1d0-03d9-4034-a8ef-dad82f7f6b87/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.622688 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb663bcc-159b-4604-8582-75a4baff492f" path="/var/lib/kubelet/pods/fb663bcc-159b-4604-8582-75a4baff492f/volumes" Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.627073 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.638347 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.639750 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pjzqp"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.639783 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c37c-account-create-update-mb86j"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.639798 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kdhxd"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.670426 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.670487 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.674549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" (UID: "f3a68e3d-78f4-4a7a-9915-0801f0ffeed6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.692351 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-07af-account-create-update-wm8x8"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.695056 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.709632 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kdhxd"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.721855 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.722084 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.731380 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.760276 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6sqk"] Jan 27 08:11:54 crc kubenswrapper[4799]: E0127 08:11:54.760753 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" 
containerName="openstack-network-exporter" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.760766 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" containerName="openstack-network-exporter" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.761018 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" containerName="openstack-network-exporter" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.762281 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.770543 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6sqk"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.776115 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.776451 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="69778bc9-c84e-42d0-9645-7fd3afa2ca28" containerName="nova-scheduler-scheduler" containerID="cri-o://bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.811357 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerName="rabbitmq" containerID="cri-o://89a107c494f936fe6c451a1549012a9e938164def6d5383c5c024b222a155ca4" gracePeriod=604800 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.855533 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.855800 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell1-conductor-0" podUID="8bca1b10-545f-4e35-a5af-e760d464d0ff" containerName="nova-cell1-conductor-conductor" containerID="cri-o://647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.893514 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tfq6j"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.901254 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-catalog-content\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.901291 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-utilities\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.901355 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv5l7\" (UniqueName: \"kubernetes.io/projected/3c0d170a-443e-438c-b4cd-0be234b7594c-kube-api-access-kv5l7\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.930638 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tfq6j"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.956377 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 08:11:54 crc 
kubenswrapper[4799]: I0127 08:11:54.956648 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="3c53857a-2e9c-4057-9f69-3611704d36f5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.967593 4799 generic.go:334] "Generic (PLEG): container finished" podID="82b996cd-10af-493c-9972-bb6d9bedc711" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" exitCode=0 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.967718 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerDied","Data":"cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e"} Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.982932 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hmh5p"] Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.990784 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="26e17670-568e-498f-be09-ffb1406c3152" containerName="galera" containerID="cri-o://4bd65b5bc7d74ca250c680832d09104b02e9463eba911724fc54dcc3b8686b82" gracePeriod=30 Jan 27 08:11:54 crc kubenswrapper[4799]: I0127 08:11:54.995105 4799 generic.go:334] "Generic (PLEG): container finished" podID="07e52675-0afa-4579-a5c1-f0aba31dd6e7" containerID="932c19e1786489198ed2fd00256bc0ea9ef4db8f5bd057c7ca4751c183f4f1d6" exitCode=137 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.004636 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv5l7\" (UniqueName: \"kubernetes.io/projected/3c0d170a-443e-438c-b4cd-0be234b7594c-kube-api-access-kv5l7\") pod \"certified-operators-t6sqk\" 
(UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.004815 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-catalog-content\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.004835 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-utilities\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.005384 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-utilities\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.005997 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-catalog-content\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.008340 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hmh5p"] Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.022468 4799 generic.go:334] "Generic (PLEG): container finished" podID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" 
containerID="645e7a56ca8ac37dc99398357884c68673d8c691b0491e5e93509703b5f8f491" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.022536 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4","Type":"ContainerDied","Data":"645e7a56ca8ac37dc99398357884c68673d8c691b0491e5e93509703b5f8f491"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.041977 4799 generic.go:334] "Generic (PLEG): container finished" podID="b04e9a37-9722-491b-ada1-992d747e5bed" containerID="f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.042058 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b04e9a37-9722-491b-ada1-992d747e5bed","Type":"ContainerDied","Data":"f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.056125 4799 generic.go:334] "Generic (PLEG): container finished" podID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerID="947b7b89455c6bd8431f5a9a840ca3ceaaa323705be0a8b142bb4d7b5cf00a77" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.056575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89","Type":"ContainerDied","Data":"947b7b89455c6bd8431f5a9a840ca3ceaaa323705be0a8b142bb4d7b5cf00a77"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.070869 4799 generic.go:334] "Generic (PLEG): container finished" podID="2d63e438-475a-4686-861e-5fba1fcb6767" containerID="a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.070943 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" 
event={"ID":"2d63e438-475a-4686-861e-5fba1fcb6767","Type":"ContainerDied","Data":"a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.096845 4799 generic.go:334] "Generic (PLEG): container finished" podID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerID="16a1ef8c1f548424a36b350fd3a051e370a5acd755ff4d2a876031dd99b39489" exitCode=1 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.096914 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9vvt6" event={"ID":"f97b84a5-a34c-405f-8357-70cad8efedbc","Type":"ContainerDied","Data":"16a1ef8c1f548424a36b350fd3a051e370a5acd755ff4d2a876031dd99b39489"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.096938 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9vvt6" event={"ID":"f97b84a5-a34c-405f-8357-70cad8efedbc","Type":"ContainerStarted","Data":"333e93a7500bc2fa96959f58aa041856b5f7249b66b7247beb95c35c0357f70c"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.097472 4799 scope.go:117] "RemoveContainer" containerID="16a1ef8c1f548424a36b350fd3a051e370a5acd755ff4d2a876031dd99b39489" Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.107577 4799 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.107594 4799 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.107641 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:57.107621131 +0000 UTC m=+1583.418725196 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scheduler-config-data" not found Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.107656 4799 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.107660 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:57.107650181 +0000 UTC m=+1583.418754246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scripts" not found Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.107683 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:11:57.107669602 +0000 UTC m=+1583.418773667 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-config-data" not found Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.131775 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-796576645f-ws7ff" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.165:8080/healthcheck\": dial tcp 10.217.0.165:8080: connect: connection refused" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.131865 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-796576645f-ws7ff" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.165:8080/healthcheck\": dial tcp 10.217.0.165:8080: connect: connection refused" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.141978 4799 generic.go:334] "Generic (PLEG): container finished" podID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerID="40d6b9faa74af8ff6a32d01f9fc3a6c0f6258a0b08ea53fa5774e5655a3aa97d" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.142291 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c6c54fbb-zhvhw" event={"ID":"0dbfc3a0-883d-46a6-af9b-879efb42840e","Type":"ContainerDied","Data":"40d6b9faa74af8ff6a32d01f9fc3a6c0f6258a0b08ea53fa5774e5655a3aa97d"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.146872 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e6b7da0a-2774-4bae-ba2f-3b943e027082/ovsdbserver-nb/0.log" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.146916 4799 generic.go:334] "Generic (PLEG): container finished" podID="e6b7da0a-2774-4bae-ba2f-3b943e027082" 
containerID="ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.146997 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b7da0a-2774-4bae-ba2f-3b943e027082","Type":"ContainerDied","Data":"ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.161163 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv5l7\" (UniqueName: \"kubernetes.io/projected/3c0d170a-443e-438c-b4cd-0be234b7594c-kube-api-access-kv5l7\") pod \"certified-operators-t6sqk\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") " pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.164824 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6sqk" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.226017 4799 generic.go:334] "Generic (PLEG): container finished" podID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerID="7a13a4dba57a64680601c65f46bc4e1fd1ddd9881983073fa8db00588d91d96c" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.226097 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff7b8d449-xjt48" event={"ID":"2db9ba76-0532-4ed0-972e-fd5452048b97","Type":"ContainerDied","Data":"7a13a4dba57a64680601c65f46bc4e1fd1ddd9881983073fa8db00588d91d96c"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.252391 4799 generic.go:334] "Generic (PLEG): container finished" podID="034b328a-c365-4b0a-8346-1cd571d65921" containerID="e4071f639ad9f711f3ce82cf2b36a21d6fcce03277af1036060c6ec9c693832d" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.252460 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"034b328a-c365-4b0a-8346-1cd571d65921","Type":"ContainerDied","Data":"e4071f639ad9f711f3ce82cf2b36a21d6fcce03277af1036060c6ec9c693832d"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.261807 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3eda-account-create-update-9krkt"] Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.326321 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326800 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326814 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326822 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326831 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326838 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108" exitCode=0 
Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326859 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326866 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326872 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326880 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326886 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326891 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326949 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326974 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326984 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.326994 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.327020 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.327029 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.327040 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.327048 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 
08:11:55.327058 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.327068 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.327093 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca"} Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.333968 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.336566 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.336615 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3c53857a-2e9c-4057-9f69-3611704d36f5" 
containerName="nova-cell0-conductor-conductor" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.339881 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_617cc655-aae2-4918-ba79-05e346cf9200/ovsdbserver-sb/0.log" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.339933 4799 generic.go:334] "Generic (PLEG): container finished" podID="617cc655-aae2-4918-ba79-05e346cf9200" containerID="8b889d06f7ebe01917c15ab23bb6f82de1d3280d85886ca49cee0080e8046c73" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.340051 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"617cc655-aae2-4918-ba79-05e346cf9200","Type":"ContainerDied","Data":"8b889d06f7ebe01917c15ab23bb6f82de1d3280d85886ca49cee0080e8046c73"} Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.355195 4799 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 08:11:55 crc kubenswrapper[4799]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: if [ -n "neutron" ]; then Jan 27 08:11:55 crc kubenswrapper[4799]: GRANT_DATABASE="neutron" Jan 27 08:11:55 crc kubenswrapper[4799]: else Jan 27 08:11:55 crc kubenswrapper[4799]: GRANT_DATABASE="*" Jan 27 08:11:55 crc kubenswrapper[4799]: fi Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: # 
going for maximum compatibility here: Jan 27 08:11:55 crc kubenswrapper[4799]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 08:11:55 crc kubenswrapper[4799]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 08:11:55 crc kubenswrapper[4799]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 08:11:55 crc kubenswrapper[4799]: # support updates Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: $MYSQL_CMD < logger="UnhandledError" Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.356386 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-3eda-account-create-update-9krkt" podUID="ade86985-ca70-4f21-ae7a-825353f912cb" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.360123 4799 generic.go:334] "Generic (PLEG): container finished" podID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerID="2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f" exitCode=0 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.360173 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"182368c8-7aeb-4cfe-8de7-60794b59792c","Type":"ContainerDied","Data":"2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.372025 4799 generic.go:334] "Generic (PLEG): container finished" podID="57b30668-20df-41a6-80b4-ee59aea714dc" containerID="f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.372088 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85fc64b547-v7lvv" 
event={"ID":"57b30668-20df-41a6-80b4-ee59aea714dc","Type":"ContainerDied","Data":"f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.373914 4799 generic.go:334] "Generic (PLEG): container finished" podID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerID="801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b" exitCode=143 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.373999 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dt4kd" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.374014 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8985574d-z64hk" event={"ID":"c28c66b4-aa13-41ed-8045-b6f131d48146","Type":"ContainerDied","Data":"801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b"} Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.476373 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.486560 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-dt4kd"] Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.493157 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-dt4kd"] Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.669196 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_617cc655-aae2-4918-ba79-05e346cf9200/ovsdbserver-sb/0.log" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.669543 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.682004 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-07af-account-create-update-wm8x8"] Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.687174 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.689894 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.694832 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e6b7da0a-2774-4bae-ba2f-3b943e027082/ovsdbserver-nb/0.log" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.694907 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 08:11:55 crc kubenswrapper[4799]: W0127 08:11:55.722075 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a55cd9e_7386_41a1_912c_0876a917bd93.slice/crio-138745f101a65274022c16971d361d1fcf306be1f5f446c8620362a101f67679 WatchSource:0}: Error finding container 138745f101a65274022c16971d361d1fcf306be1f5f446c8620362a101f67679: Status 404 returned error can't find the container with id 138745f101a65274022c16971d361d1fcf306be1f5f446c8620362a101f67679 Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.723994 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-combined-ca-bundle\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.724959 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-config\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.725212 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg5x\" (UniqueName: \"kubernetes.io/projected/617cc655-aae2-4918-ba79-05e346cf9200-kube-api-access-slg5x\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.725269 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/617cc655-aae2-4918-ba79-05e346cf9200-ovsdb-rundir\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.725330 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-ovsdbserver-sb-tls-certs\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.725362 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-scripts\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.725504 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-metrics-certs-tls-certs\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: 
\"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.725551 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"617cc655-aae2-4918-ba79-05e346cf9200\" (UID: \"617cc655-aae2-4918-ba79-05e346cf9200\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.726867 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/617cc655-aae2-4918-ba79-05e346cf9200-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.727655 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-config" (OuterVolumeSpecName: "config") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.727956 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-scripts" (OuterVolumeSpecName: "scripts") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.729514 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.729625 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/617cc655-aae2-4918-ba79-05e346cf9200-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.729707 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/617cc655-aae2-4918-ba79-05e346cf9200-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.741608 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.742656 4799 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 08:11:55 crc kubenswrapper[4799]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: if [ -n "nova_api" ]; then Jan 27 08:11:55 crc kubenswrapper[4799]: GRANT_DATABASE="nova_api" Jan 27 08:11:55 crc kubenswrapper[4799]: else Jan 27 08:11:55 crc kubenswrapper[4799]: GRANT_DATABASE="*" Jan 27 08:11:55 crc kubenswrapper[4799]: fi Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: # going for maximum compatibility here: Jan 27 08:11:55 crc kubenswrapper[4799]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 08:11:55 crc kubenswrapper[4799]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 08:11:55 crc kubenswrapper[4799]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 08:11:55 crc kubenswrapper[4799]: # support updates Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: $MYSQL_CMD < logger="UnhandledError" Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.747547 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-07af-account-create-update-wm8x8" podUID="7a55cd9e-7386-41a1-912c-0876a917bd93" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.757964 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617cc655-aae2-4918-ba79-05e346cf9200-kube-api-access-slg5x" (OuterVolumeSpecName: "kube-api-access-slg5x") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "kube-api-access-slg5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.814042 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.826811 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c37c-account-create-update-mb86j"] Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.832969 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-config\") pod \"d57ed20b-0573-4924-aeee-bef05838e330\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833055 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-combined-ca-bundle\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833116 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzn94\" (UniqueName: \"kubernetes.io/projected/07e52675-0afa-4579-a5c1-f0aba31dd6e7-kube-api-access-qzn94\") pod \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833139 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-scripts\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833161 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config-secret\") pod \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\" (UID: 
\"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833199 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833240 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlgkw\" (UniqueName: \"kubernetes.io/projected/d57ed20b-0573-4924-aeee-bef05838e330-kube-api-access-vlgkw\") pod \"d57ed20b-0573-4924-aeee-bef05838e330\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833293 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdbserver-nb-tls-certs\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833362 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-metrics-certs-tls-certs\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833388 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-svc\") pod \"d57ed20b-0573-4924-aeee-bef05838e330\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833418 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdb-rundir\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833442 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tmcz\" (UniqueName: \"kubernetes.io/projected/e6b7da0a-2774-4bae-ba2f-3b943e027082-kube-api-access-7tmcz\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833464 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-config\") pod \"e6b7da0a-2774-4bae-ba2f-3b943e027082\" (UID: \"e6b7da0a-2774-4bae-ba2f-3b943e027082\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833484 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-nb\") pod \"d57ed20b-0573-4924-aeee-bef05838e330\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833503 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-swift-storage-0\") pod \"d57ed20b-0573-4924-aeee-bef05838e330\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833524 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-sb\") pod \"d57ed20b-0573-4924-aeee-bef05838e330\" (UID: \"d57ed20b-0573-4924-aeee-bef05838e330\") " Jan 27 08:11:55 crc 
kubenswrapper[4799]: I0127 08:11:55.833555 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config\") pod \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.833583 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-combined-ca-bundle\") pod \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\" (UID: \"07e52675-0afa-4579-a5c1-f0aba31dd6e7\") " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.834014 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.834028 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.834038 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg5x\" (UniqueName: \"kubernetes.io/projected/617cc655-aae2-4918-ba79-05e346cf9200-kube-api-access-slg5x\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.835493 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.837811 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-scripts" (OuterVolumeSpecName: "scripts") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.838481 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.838571 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data podName:0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0 nodeName:}" failed. No retries permitted until 2026-01-27 08:11:59.838548494 +0000 UTC m=+1586.149652609 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data") pod "rabbitmq-cell1-server-0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0") : configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.842034 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57ed20b-0573-4924-aeee-bef05838e330-kube-api-access-vlgkw" (OuterVolumeSpecName: "kube-api-access-vlgkw") pod "d57ed20b-0573-4924-aeee-bef05838e330" (UID: "d57ed20b-0573-4924-aeee-bef05838e330"). InnerVolumeSpecName "kube-api-access-vlgkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.843088 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e52675-0afa-4579-a5c1-f0aba31dd6e7-kube-api-access-qzn94" (OuterVolumeSpecName: "kube-api-access-qzn94") pod "07e52675-0afa-4579-a5c1-f0aba31dd6e7" (UID: "07e52675-0afa-4579-a5c1-f0aba31dd6e7"). InnerVolumeSpecName "kube-api-access-qzn94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.844053 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-config" (OuterVolumeSpecName: "config") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.844439 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b7da0a-2774-4bae-ba2f-3b943e027082-kube-api-access-7tmcz" (OuterVolumeSpecName: "kube-api-access-7tmcz") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "kube-api-access-7tmcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.846281 4799 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 08:11:55 crc kubenswrapper[4799]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: if [ -n "placement" ]; then Jan 27 08:11:55 crc kubenswrapper[4799]: GRANT_DATABASE="placement" Jan 27 08:11:55 crc kubenswrapper[4799]: else Jan 27 08:11:55 crc kubenswrapper[4799]: GRANT_DATABASE="*" Jan 27 08:11:55 crc kubenswrapper[4799]: fi Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: # going for maximum compatibility here: Jan 27 08:11:55 crc kubenswrapper[4799]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 08:11:55 crc kubenswrapper[4799]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 08:11:55 crc kubenswrapper[4799]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 08:11:55 crc kubenswrapper[4799]: # support updates Jan 27 08:11:55 crc kubenswrapper[4799]: Jan 27 08:11:55 crc kubenswrapper[4799]: $MYSQL_CMD < logger="UnhandledError" Jan 27 08:11:55 crc kubenswrapper[4799]: E0127 08:11:55.847438 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-c37c-account-create-update-mb86j" podUID="1ac2fc5a-2192-497c-ad7f-76a3fef58da6" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.855812 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.912866 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937589 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937627 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937636 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tmcz\" (UniqueName: \"kubernetes.io/projected/e6b7da0a-2774-4bae-ba2f-3b943e027082-kube-api-access-7tmcz\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937644 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937657 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzn94\" (UniqueName: \"kubernetes.io/projected/07e52675-0afa-4579-a5c1-f0aba31dd6e7-kube-api-access-qzn94\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937669 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7da0a-2774-4bae-ba2f-3b943e027082-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937689 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.937698 4799 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlgkw\" (UniqueName: \"kubernetes.io/projected/d57ed20b-0573-4924-aeee-bef05838e330-kube-api-access-vlgkw\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.938496 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 08:11:55 crc kubenswrapper[4799]: I0127 08:11:55.969646 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e52675-0afa-4579-a5c1-f0aba31dd6e7" (UID: "07e52675-0afa-4579-a5c1-f0aba31dd6e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.021988 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.041911 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.041945 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.041957 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.062755 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "07e52675-0afa-4579-a5c1-f0aba31dd6e7" (UID: "07e52675-0afa-4579-a5c1-f0aba31dd6e7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.112571 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.113206 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-config" (OuterVolumeSpecName: "config") pod "d57ed20b-0573-4924-aeee-bef05838e330" (UID: "d57ed20b-0573-4924-aeee-bef05838e330"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.114795 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d57ed20b-0573-4924-aeee-bef05838e330" (UID: "d57ed20b-0573-4924-aeee-bef05838e330"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.121222 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d57ed20b-0573-4924-aeee-bef05838e330" (UID: "d57ed20b-0573-4924-aeee-bef05838e330"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.124773 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "617cc655-aae2-4918-ba79-05e346cf9200" (UID: "617cc655-aae2-4918-ba79-05e346cf9200"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.145747 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d57ed20b-0573-4924-aeee-bef05838e330" (UID: "d57ed20b-0573-4924-aeee-bef05838e330"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.157165 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.160390 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "07e52675-0afa-4579-a5c1-f0aba31dd6e7" (UID: "07e52675-0afa-4579-a5c1-f0aba31dd6e7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173488 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173520 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173533 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173552 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173564 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173577 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/617cc655-aae2-4918-ba79-05e346cf9200-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173589 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/07e52675-0afa-4579-a5c1-f0aba31dd6e7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173604 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.173616 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.207934 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d57ed20b-0573-4924-aeee-bef05838e330" (UID: "d57ed20b-0573-4924-aeee-bef05838e330"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.220814 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e6b7da0a-2774-4bae-ba2f-3b943e027082" (UID: "e6b7da0a-2774-4bae-ba2f-3b943e027082"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.277273 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b7da0a-2774-4bae-ba2f-3b943e027082-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.277338 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57ed20b-0573-4924-aeee-bef05838e330-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.354745 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.363064 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.371152 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-9pcqg"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.390108 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.393551 4799 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 08:11:56 crc kubenswrapper[4799]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: if [ -n "nova_cell0" ]; then Jan 27 08:11:56 crc kubenswrapper[4799]: GRANT_DATABASE="nova_cell0" Jan 27 08:11:56 crc kubenswrapper[4799]: else Jan 27 08:11:56 crc kubenswrapper[4799]: GRANT_DATABASE="*" Jan 27 08:11:56 crc kubenswrapper[4799]: fi Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: # going for maximum compatibility here: Jan 27 08:11:56 crc kubenswrapper[4799]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 08:11:56 crc kubenswrapper[4799]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 08:11:56 crc kubenswrapper[4799]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 08:11:56 crc kubenswrapper[4799]: # support updates Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: $MYSQL_CMD < logger="UnhandledError" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.394658 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" podUID="caa95ce2-79d9-4314-af1c-6d3b93667cb5" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.402010 4799 generic.go:334] "Generic (PLEG): container finished" podID="57b30668-20df-41a6-80b4-ee59aea714dc" containerID="5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.402070 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85fc64b547-v7lvv" event={"ID":"57b30668-20df-41a6-80b4-ee59aea714dc","Type":"ContainerDied","Data":"5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.402097 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85fc64b547-v7lvv" event={"ID":"57b30668-20df-41a6-80b4-ee59aea714dc","Type":"ContainerDied","Data":"560b70a2b269e41b89f2f6f6d53fbb201d6dec617563b16c36013515d335ff36"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.402114 4799 scope.go:117] "RemoveContainer" containerID="5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.402225 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-85fc64b547-v7lvv" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.407584 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.408683 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3eda-account-create-update-9krkt" event={"ID":"ade86985-ca70-4f21-ae7a-825353f912cb","Type":"ContainerStarted","Data":"9846208d51de11a10ef4cc74656c4c6a063aa48cc86a6416c8bd6ac5d7eb4865"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.410626 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-07af-account-create-update-wm8x8" event={"ID":"7a55cd9e-7386-41a1-912c-0876a917bd93","Type":"ContainerStarted","Data":"138745f101a65274022c16971d361d1fcf306be1f5f446c8620362a101f67679"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.436571 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5c72-account-create-update-mpznk"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.466153 4799 scope.go:117] "RemoveContainer" containerID="f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468" Jan 27 08:11:56 crc kubenswrapper[4799]: W0127 08:11:56.467250 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fa13966_417e_4920_8ecc_5afc73396410.slice/crio-68a5b6483afc7f56e5996950e048d2e072ab2b26d636229789d1047907be6ae8 WatchSource:0}: Error finding container 68a5b6483afc7f56e5996950e048d2e072ab2b26d636229789d1047907be6ae8: Status 404 returned error can't find the container with id 68a5b6483afc7f56e5996950e048d2e072ab2b26d636229789d1047907be6ae8 Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.475629 4799 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 08:11:56 crc kubenswrapper[4799]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: if [ -n "cinder" ]; then Jan 27 08:11:56 crc kubenswrapper[4799]: GRANT_DATABASE="cinder" Jan 27 08:11:56 crc kubenswrapper[4799]: else Jan 27 08:11:56 crc kubenswrapper[4799]: GRANT_DATABASE="*" Jan 27 08:11:56 crc kubenswrapper[4799]: fi Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: # going for maximum compatibility here: Jan 27 08:11:56 crc kubenswrapper[4799]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 08:11:56 crc kubenswrapper[4799]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 08:11:56 crc kubenswrapper[4799]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 08:11:56 crc kubenswrapper[4799]: # support updates Jan 27 08:11:56 crc kubenswrapper[4799]: Jan 27 08:11:56 crc kubenswrapper[4799]: $MYSQL_CMD < logger="UnhandledError" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.477510 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-5c72-account-create-update-mpznk" podUID="2fa13966-417e-4920-8ecc-5afc73396410" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.483971 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.484008 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.484015 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.488326 4799 generic.go:334] "Generic (PLEG): container finished" podID="b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" containerID="7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.488438 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.495441 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-etc-swift\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.495841 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data\") pod \"2d63e438-475a-4686-861e-5fba1fcb6767\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.496742 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data\") pod \"57b30668-20df-41a6-80b4-ee59aea714dc\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.496835 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d63e438-475a-4686-861e-5fba1fcb6767-logs\") pod \"2d63e438-475a-4686-861e-5fba1fcb6767\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.496889 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-log-httpd\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.496931 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-combined-ca-bundle\") pod \"2d63e438-475a-4686-861e-5fba1fcb6767\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.496961 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7swqq\" (UniqueName: \"kubernetes.io/projected/57b30668-20df-41a6-80b4-ee59aea714dc-kube-api-access-7swqq\") pod \"57b30668-20df-41a6-80b4-ee59aea714dc\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497025 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-combined-ca-bundle\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497058 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-vencrypt-tls-certs\") pod \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497082 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data-custom\") pod \"57b30668-20df-41a6-80b4-ee59aea714dc\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497109 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data-custom\") pod \"2d63e438-475a-4686-861e-5fba1fcb6767\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " Jan 27 08:11:56 
crc kubenswrapper[4799]: I0127 08:11:56.497133 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b30668-20df-41a6-80b4-ee59aea714dc-logs\") pod \"57b30668-20df-41a6-80b4-ee59aea714dc\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497169 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kdmq\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-kube-api-access-9kdmq\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497193 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-nova-novncproxy-tls-certs\") pod \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497216 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf8ts\" (UniqueName: \"kubernetes.io/projected/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-kube-api-access-pf8ts\") pod \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497246 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-public-tls-certs\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497272 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-config-data\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.497341 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-combined-ca-bundle\") pod \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.501844 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-internal-tls-certs\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.501908 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-run-httpd\") pod \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\" (UID: \"786fd8aa-3ed9-420c-bdcd-b15a36795e72\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.501987 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcz95\" (UniqueName: \"kubernetes.io/projected/2d63e438-475a-4686-861e-5fba1fcb6767-kube-api-access-gcz95\") pod \"2d63e438-475a-4686-861e-5fba1fcb6767\" (UID: \"2d63e438-475a-4686-861e-5fba1fcb6767\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.502085 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-combined-ca-bundle\") pod \"57b30668-20df-41a6-80b4-ee59aea714dc\" (UID: \"57b30668-20df-41a6-80b4-ee59aea714dc\") " Jan 27 08:11:56 crc kubenswrapper[4799]: 
I0127 08:11:56.502139 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-config-data\") pod \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\" (UID: \"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813\") " Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.503330 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.503409 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data podName:8d822fe6-f547-4b8f-a6e4-c7256e1b2ace nodeName:}" failed. No retries permitted until 2026-01-27 08:12:00.503375789 +0000 UTC m=+1586.814479994 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data") pod "rabbitmq-server-0" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace") : configmap "rabbitmq-config-data" not found Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.503972 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e6b7da0a-2774-4bae-ba2f-3b943e027082/ovsdbserver-nb/0.log" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.504158 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.506607 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.522369 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b30668-20df-41a6-80b4-ee59aea714dc-logs" (OuterVolumeSpecName: "logs") pod "57b30668-20df-41a6-80b4-ee59aea714dc" (UID: "57b30668-20df-41a6-80b4-ee59aea714dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.531997 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.533464 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d63e438-475a-4686-861e-5fba1fcb6767-logs" (OuterVolumeSpecName: "logs") pod "2d63e438-475a-4686-861e-5fba1fcb6767" (UID: "2d63e438-475a-4686-861e-5fba1fcb6767"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.538049 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.548887 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e52675-0afa-4579-a5c1-f0aba31dd6e7" path="/var/lib/kubelet/pods/07e52675-0afa-4579-a5c1-f0aba31dd6e7/volumes" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.550275 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83464c1e-e470-4907-aece-b0aeea8a7ff2" path="/var/lib/kubelet/pods/83464c1e-e470-4907-aece-b0aeea8a7ff2/volumes" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.551757 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a070ecb5-b0ed-42b2-9778-07e62cffe5c4" path="/var/lib/kubelet/pods/a070ecb5-b0ed-42b2-9778-07e62cffe5c4/volumes" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.553352 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de94f63b-88ce-4f40-acc5-d9f70195f265" path="/var/lib/kubelet/pods/de94f63b-88ce-4f40-acc5-d9f70195f265/volumes" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.553832 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_617cc655-aae2-4918-ba79-05e346cf9200/ovsdbserver-sb/0.log" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.554029 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8974c0d-814e-4e96-b79e-41971c3761c7" path="/var/lib/kubelet/pods/e8974c0d-814e-4e96-b79e-41971c3761c7/volumes" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.554083 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.555167 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a68e3d-78f4-4a7a-9915-0801f0ffeed6" path="/var/lib/kubelet/pods/f3a68e3d-78f4-4a7a-9915-0801f0ffeed6/volumes" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.566180 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57b30668-20df-41a6-80b4-ee59aea714dc" (UID: "57b30668-20df-41a6-80b4-ee59aea714dc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.567074 4799 scope.go:117] "RemoveContainer" containerID="5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.568669 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8\": container with ID starting with 5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8 not found: ID does not exist" containerID="5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.568731 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8"} err="failed to get container status \"5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8\": rpc error: code = NotFound desc = could not find container \"5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8\": container with ID starting with 5d5582170f2ff95c4d921e6cfd3e75f6a45a23ac72d43f17c5d9b3b7565c91e8 not found: ID does not exist" Jan 27 
08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.568760 4799 scope.go:117] "RemoveContainer" containerID="f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.569665 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468\": container with ID starting with f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468 not found: ID does not exist" containerID="f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.569701 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468"} err="failed to get container status \"f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468\": rpc error: code = NotFound desc = could not find container \"f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468\": container with ID starting with f7e155f75ed90bc86ce2b18a04f696ca9d79b6490d4aead206b8756ba19fd468 not found: ID does not exist" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.570283 4799 generic.go:334] "Generic (PLEG): container finished" podID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerID="138d26b6968877e65cfe794731a3ffaae35afc65d14bc11d0641011dd81c571a" exitCode=1 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.572440 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2d63e438-475a-4686-861e-5fba1fcb6767" (UID: "2d63e438-475a-4686-861e-5fba1fcb6767"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.573262 4799 scope.go:117] "RemoveContainer" containerID="138d26b6968877e65cfe794731a3ffaae35afc65d14bc11d0641011dd81c571a" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.573665 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-9vvt6_openstack(f97b84a5-a34c-405f-8357-70cad8efedbc)\"" pod="openstack/root-account-create-update-9vvt6" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.581171 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.583818 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.583955 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.583972 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.583982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813","Type":"ContainerDied","Data":"7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.583995 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3f9568f-3dd7-4bdb-9b53-2e6ec291e813","Type":"ContainerDied","Data":"f623259fd3de0140b07d67af2f49130edee9e18c7f5bc29a500ec3619f972381"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.584005 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6b7da0a-2774-4bae-ba2f-3b943e027082","Type":"ContainerDied","Data":"02a2eae437335800ff988d22c88c6fd3208704b5e6630014b4fe86153ce6795e"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.584020 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"617cc655-aae2-4918-ba79-05e346cf9200","Type":"ContainerDied","Data":"ab4ed8a7cf947fe16abe4581853510f2754cc1aa328e7f070e0d1a6ecd3f307c"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.584032 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9vvt6" event={"ID":"f97b84a5-a34c-405f-8357-70cad8efedbc","Type":"ContainerDied","Data":"138d26b6968877e65cfe794731a3ffaae35afc65d14bc11d0641011dd81c571a"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.584045 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c37c-account-create-update-mb86j" event={"ID":"1ac2fc5a-2192-497c-ad7f-76a3fef58da6","Type":"ContainerStarted","Data":"7787c5f2a4243ae1971d2c7f8d79b348bd25ff6a781a046bec44b98290010ae0"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.584069 4799 scope.go:117] "RemoveContainer" containerID="7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.598350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" event={"ID":"d57ed20b-0573-4924-aeee-bef05838e330","Type":"ContainerDied","Data":"612e1ae8b857473df4951719e8cdf4f8a94e6a4f6a82cd7fa585c44a8683de04"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.598468 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hd42p" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.604512 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.604536 4799 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.604546 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d63e438-475a-4686-861e-5fba1fcb6767-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.604555 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/786fd8aa-3ed9-420c-bdcd-b15a36795e72-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.604564 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.604572 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.604580 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b30668-20df-41a6-80b4-ee59aea714dc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.606287 4799 generic.go:334] "Generic (PLEG): container 
finished" podID="2d63e438-475a-4686-861e-5fba1fcb6767" containerID="f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.608357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" event={"ID":"2d63e438-475a-4686-861e-5fba1fcb6767","Type":"ContainerDied","Data":"f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.608402 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" event={"ID":"2d63e438-475a-4686-861e-5fba1fcb6767","Type":"ContainerDied","Data":"1212791c981bc88fb260f2c57b5b8b9e0d847ee1813fc85b8dd8607887fbc597"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.608487 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b7647d64-tp8mw" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.615879 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.621651 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.632087 4799 generic.go:334] "Generic (PLEG): container finished" podID="26e17670-568e-498f-be09-ffb1406c3152" containerID="4bd65b5bc7d74ca250c680832d09104b02e9463eba911724fc54dcc3b8686b82" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.632172 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26e17670-568e-498f-be09-ffb1406c3152","Type":"ContainerDied","Data":"4bd65b5bc7d74ca250c680832d09104b02e9463eba911724fc54dcc3b8686b82"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.637080 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerID="d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.637117 4799 generic.go:334] "Generic (PLEG): container finished" podID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerID="a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db" exitCode=0 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.637148 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-796576645f-ws7ff" event={"ID":"786fd8aa-3ed9-420c-bdcd-b15a36795e72","Type":"ContainerDied","Data":"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.637192 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-796576645f-ws7ff" event={"ID":"786fd8aa-3ed9-420c-bdcd-b15a36795e72","Type":"ContainerDied","Data":"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.637208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-796576645f-ws7ff" event={"ID":"786fd8aa-3ed9-420c-bdcd-b15a36795e72","Type":"ContainerDied","Data":"224478b6a05caebbe95da6cd7ffeef1a6c164b12dfe12547665ff0091ad2f52c"} Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.637278 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-796576645f-ws7ff" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.640079 4799 scope.go:117] "RemoveContainer" containerID="7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.640562 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-kube-api-access-9kdmq" (OuterVolumeSpecName: "kube-api-access-9kdmq") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "kube-api-access-9kdmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.640675 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-kube-api-access-pf8ts" (OuterVolumeSpecName: "kube-api-access-pf8ts") pod "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" (UID: "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813"). InnerVolumeSpecName "kube-api-access-pf8ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.640696 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b30668-20df-41a6-80b4-ee59aea714dc-kube-api-access-7swqq" (OuterVolumeSpecName: "kube-api-access-7swqq") pod "57b30668-20df-41a6-80b4-ee59aea714dc" (UID: "57b30668-20df-41a6-80b4-ee59aea714dc"). InnerVolumeSpecName "kube-api-access-7swqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.640700 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46\": container with ID starting with 7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46 not found: ID does not exist" containerID="7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.640737 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46"} err="failed to get container status \"7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46\": rpc error: code = NotFound desc = could not find container \"7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46\": container with ID starting with 7dc576b58ec5fe8802a2fd56053e84ba4ee332d5e7f250677126c60dfef2ae46 not found: ID does not exist" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.640788 4799 scope.go:117] "RemoveContainer" containerID="af4582fbc376280b8069bf7f7b55933070749f10ea9380861c1e04e1287e288f" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.640714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d63e438-475a-4686-861e-5fba1fcb6767-kube-api-access-gcz95" (OuterVolumeSpecName: "kube-api-access-gcz95") pod "2d63e438-475a-4686-861e-5fba1fcb6767" (UID: "2d63e438-475a-4686-861e-5fba1fcb6767"). InnerVolumeSpecName "kube-api-access-gcz95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.662535 4799 scope.go:117] "RemoveContainer" containerID="ab6cec124f22dcd31e62ec157b18fc36096228b880c78321a66eca8e1a726508" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.662704 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hd42p"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.678272 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hd42p"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.690953 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57b30668-20df-41a6-80b4-ee59aea714dc" (UID: "57b30668-20df-41a6-80b4-ee59aea714dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.702385 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.707499 4799 scope.go:117] "RemoveContainer" containerID="932c19e1786489198ed2fd00256bc0ea9ef4db8f5bd057c7ca4751c183f4f1d6" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.708332 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.709824 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.709857 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7swqq\" (UniqueName: 
\"kubernetes.io/projected/57b30668-20df-41a6-80b4-ee59aea714dc-kube-api-access-7swqq\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.709868 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kdmq\" (UniqueName: \"kubernetes.io/projected/786fd8aa-3ed9-420c-bdcd-b15a36795e72-kube-api-access-9kdmq\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.713223 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf8ts\" (UniqueName: \"kubernetes.io/projected/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-kube-api-access-pf8ts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.713256 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcz95\" (UniqueName: \"kubernetes.io/projected/2d63e438-475a-4686-861e-5fba1fcb6767-kube-api-access-gcz95\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.725047 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" (UID: "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.752735 4799 scope.go:117] "RemoveContainer" containerID="97b857c500f0dc120edc4b9f7299baa035a1a1571e8961c116690abe3c273321" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.763412 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.779858 4799 scope.go:117] "RemoveContainer" containerID="8b889d06f7ebe01917c15ab23bb6f82de1d3280d85886ca49cee0080e8046c73" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.795145 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-lx6nr" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" probeResult="failure" output="" Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.795930 4799 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 27 08:11:56 crc kubenswrapper[4799]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-27T08:11:54Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 27 08:11:56 crc kubenswrapper[4799]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Jan 27 08:11:56 crc kubenswrapper[4799]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-lx6nr" message=< Jan 27 08:11:56 crc kubenswrapper[4799]: Exiting ovn-controller (1) [FAILED] Jan 27 08:11:56 crc kubenswrapper[4799]: Killing ovn-controller (1) [ OK ] Jan 27 08:11:56 crc kubenswrapper[4799]: 2026-01-27T08:11:54Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 27 08:11:56 crc kubenswrapper[4799]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Jan 27 08:11:56 crc kubenswrapper[4799]: > Jan 27 08:11:56 crc kubenswrapper[4799]: E0127 08:11:56.796262 4799 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 27 08:11:56 crc kubenswrapper[4799]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-27T08:11:54Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 27 08:11:56 crc kubenswrapper[4799]: /etc/init.d/functions: line 589: 400 Alarm clock "$@" Jan 27 08:11:56 crc 
kubenswrapper[4799]: > pod="openstack/ovn-controller-lx6nr" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" containerID="cri-o://dba41f475ea813cae4d687243584f94982d501ee64961725b7a4d2f0b2272bd9" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.796470 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-lx6nr" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" containerID="cri-o://dba41f475ea813cae4d687243584f94982d501ee64961725b7a4d2f0b2272bd9" gracePeriod=26 Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.815162 4799 scope.go:117] "RemoveContainer" containerID="16a1ef8c1f548424a36b350fd3a051e370a5acd755ff4d2a876031dd99b39489" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.818041 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.831108 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d63e438-475a-4686-861e-5fba1fcb6767" (UID: "2d63e438-475a-4686-861e-5fba1fcb6767"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.844348 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.848244 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data" (OuterVolumeSpecName: "config-data") pod "2d63e438-475a-4686-861e-5fba1fcb6767" (UID: "2d63e438-475a-4686-861e-5fba1fcb6767"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.900375 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-config-data" (OuterVolumeSpecName: "config-data") pod "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" (UID: "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.902021 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927336 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-combined-ca-bundle\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927457 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-config-data-default\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927484 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927541 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ssps\" (UniqueName: \"kubernetes.io/projected/26e17670-568e-498f-be09-ffb1406c3152-kube-api-access-8ssps\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927598 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26e17670-568e-498f-be09-ffb1406c3152-config-data-generated\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927669 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-galera-tls-certs\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927702 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-kolla-config\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.927726 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-operator-scripts\") pod \"26e17670-568e-498f-be09-ffb1406c3152\" (UID: \"26e17670-568e-498f-be09-ffb1406c3152\") " Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.928231 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.928255 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.928267 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.928279 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc 
kubenswrapper[4799]: I0127 08:11:56.928290 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d63e438-475a-4686-861e-5fba1fcb6767-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.940109 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.943808 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e17670-568e-498f-be09-ffb1406c3152-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.945416 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e17670-568e-498f-be09-ffb1406c3152-kube-api-access-8ssps" (OuterVolumeSpecName: "kube-api-access-8ssps") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). InnerVolumeSpecName "kube-api-access-8ssps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.952040 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). 
InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.954738 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.954751 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" (UID: "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.954920 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data" (OuterVolumeSpecName: "config-data") pod "57b30668-20df-41a6-80b4-ee59aea714dc" (UID: "57b30668-20df-41a6-80b4-ee59aea714dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.957485 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" (UID: "b3f9568f-3dd7-4bdb-9b53-2e6ec291e813"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:56 crc kubenswrapper[4799]: I0127 08:11:56.986866 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.001013 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-config-data" (OuterVolumeSpecName: "config-data") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.029218 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "786fd8aa-3ed9-420c-bdcd-b15a36795e72" (UID: "786fd8aa-3ed9-420c-bdcd-b15a36795e72"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.033914 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ssps\" (UniqueName: \"kubernetes.io/projected/26e17670-568e-498f-be09-ffb1406c3152-kube-api-access-8ssps\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.033942 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/26e17670-568e-498f-be09-ffb1406c3152-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.033952 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b30668-20df-41a6-80b4-ee59aea714dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.033961 4799 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.033969 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.033978 4799 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.033987 4799 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc 
kubenswrapper[4799]: I0127 08:11:57.033994 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.034002 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/786fd8aa-3ed9-420c-bdcd-b15a36795e72-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.034010 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/26e17670-568e-498f-be09-ffb1406c3152-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.034028 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.051326 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.116624 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.136134 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.136161 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.136252 4799 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.136312 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:12:01.13628381 +0000 UTC m=+1587.447387875 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scripts" not found Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.136626 4799 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.136670 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:12:01.13664737 +0000 UTC m=+1587.447751435 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scheduler-config-data" not found Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.136732 4799 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.136838 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:12:01.136801414 +0000 UTC m=+1587.447905479 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-config-data" not found Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.138900 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "26e17670-568e-498f-be09-ffb1406c3152" (UID: "26e17670-568e-498f-be09-ffb1406c3152"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.186212 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6sqk"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.241497 4799 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e17670-568e-498f-be09-ffb1406c3152-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.373695 4799 scope.go:117] "RemoveContainer" containerID="242ed1a16fd5f7f954693a993a1d2ded4c83efdbd95645efc9040027f0bb6c24" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.374247 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.435585 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.461608 4799 scope.go:117] "RemoveContainer" containerID="6ddb09c90bfd7470ba6d8bdea139c01bef7b6743e8c08b484044d02f2fa7ad61" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.472002 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.493864 4799 scope.go:117] "RemoveContainer" containerID="f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.508626 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.518747 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.549167 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fghhh\" (UniqueName: \"kubernetes.io/projected/7a55cd9e-7386-41a1-912c-0876a917bd93-kube-api-access-fghhh\") pod \"7a55cd9e-7386-41a1-912c-0876a917bd93\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.549281 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z58s4\" (UniqueName: \"kubernetes.io/projected/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-kube-api-access-z58s4\") pod \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\" (UID: \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.549375 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade86985-ca70-4f21-ae7a-825353f912cb-operator-scripts\") pod \"ade86985-ca70-4f21-ae7a-825353f912cb\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.549411 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-operator-scripts\") pod \"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\" (UID: 
\"1ac2fc5a-2192-497c-ad7f-76a3fef58da6\") " Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.549469 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a55cd9e-7386-41a1-912c-0876a917bd93-operator-scripts\") pod \"7a55cd9e-7386-41a1-912c-0876a917bd93\" (UID: \"7a55cd9e-7386-41a1-912c-0876a917bd93\") " Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.549562 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxth\" (UniqueName: \"kubernetes.io/projected/ade86985-ca70-4f21-ae7a-825353f912cb-kube-api-access-vtxth\") pod \"ade86985-ca70-4f21-ae7a-825353f912cb\" (UID: \"ade86985-ca70-4f21-ae7a-825353f912cb\") " Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.550542 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade86985-ca70-4f21-ae7a-825353f912cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ade86985-ca70-4f21-ae7a-825353f912cb" (UID: "ade86985-ca70-4f21-ae7a-825353f912cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.551871 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a55cd9e-7386-41a1-912c-0876a917bd93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a55cd9e-7386-41a1-912c-0876a917bd93" (UID: "7a55cd9e-7386-41a1-912c-0876a917bd93"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.567099 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:33322->10.217.0.199:8775: read: connection reset by peer" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.567730 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:33336->10.217.0.199:8775: read: connection reset by peer" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.570594 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade86985-ca70-4f21-ae7a-825353f912cb-kube-api-access-vtxth" (OuterVolumeSpecName: "kube-api-access-vtxth") pod "ade86985-ca70-4f21-ae7a-825353f912cb" (UID: "ade86985-ca70-4f21-ae7a-825353f912cb"). InnerVolumeSpecName "kube-api-access-vtxth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.574592 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ac2fc5a-2192-497c-ad7f-76a3fef58da6" (UID: "1ac2fc5a-2192-497c-ad7f-76a3fef58da6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.579124 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a55cd9e-7386-41a1-912c-0876a917bd93-kube-api-access-fghhh" (OuterVolumeSpecName: "kube-api-access-fghhh") pod "7a55cd9e-7386-41a1-912c-0876a917bd93" (UID: "7a55cd9e-7386-41a1-912c-0876a917bd93"). InnerVolumeSpecName "kube-api-access-fghhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.591283 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-796576645f-ws7ff"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.592252 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-kube-api-access-z58s4" (OuterVolumeSpecName: "kube-api-access-z58s4") pod "1ac2fc5a-2192-497c-ad7f-76a3fef58da6" (UID: "1ac2fc5a-2192-497c-ad7f-76a3fef58da6"). InnerVolumeSpecName "kube-api-access-z58s4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.622343 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-796576645f-ws7ff"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.631327 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-85fc64b547-v7lvv"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.637917 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-85fc64b547-v7lvv"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.640912 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.162:8776/healthcheck\": read tcp 10.217.0.2:49412->10.217.0.162:8776: read: connection reset by peer" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.643633 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b7647d64-tp8mw"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.648343 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-b7647d64-tp8mw"] Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.653083 4799 scope.go:117] "RemoveContainer" containerID="a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.655018 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade86985-ca70-4f21-ae7a-825353f912cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.655039 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-operator-scripts\") on node \"crc\" 
DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.655150 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a55cd9e-7386-41a1-912c-0876a917bd93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.655411 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxth\" (UniqueName: \"kubernetes.io/projected/ade86985-ca70-4f21-ae7a-825353f912cb-kube-api-access-vtxth\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.655424 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fghhh\" (UniqueName: \"kubernetes.io/projected/7a55cd9e-7386-41a1-912c-0876a917bd93-kube-api-access-fghhh\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.655434 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z58s4\" (UniqueName: \"kubernetes.io/projected/1ac2fc5a-2192-497c-ad7f-76a3fef58da6-kube-api-access-z58s4\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.695761 4799 generic.go:334] "Generic (PLEG): container finished" podID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerID="9baffec134930c3ab03eac84affdcfbddaeaf581caaa16db927895c2feaa8b6f" exitCode=0 Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.695865 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89","Type":"ContainerDied","Data":"9baffec134930c3ab03eac84affdcfbddaeaf581caaa16db927895c2feaa8b6f"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.710081 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-lx6nr_c92846fc-e305-4af9-816a-4067b79d2403/ovn-controller/0.log" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.710131 4799 generic.go:334] "Generic 
(PLEG): container finished" podID="c92846fc-e305-4af9-816a-4067b79d2403" containerID="dba41f475ea813cae4d687243584f94982d501ee64961725b7a4d2f0b2272bd9" exitCode=143 Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.710225 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lx6nr" event={"ID":"c92846fc-e305-4af9-816a-4067b79d2403","Type":"ContainerDied","Data":"dba41f475ea813cae4d687243584f94982d501ee64961725b7a4d2f0b2272bd9"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.712607 4799 scope.go:117] "RemoveContainer" containerID="138d26b6968877e65cfe794731a3ffaae35afc65d14bc11d0641011dd81c571a" Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.712881 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-9vvt6_openstack(f97b84a5-a34c-405f-8357-70cad8efedbc)\"" pod="openstack/root-account-create-update-9vvt6" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.739407 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"26e17670-568e-498f-be09-ffb1406c3152","Type":"ContainerDied","Data":"5db7784bd31ea60c5e7d19657594f084e52f4dccc06e5be9cb330076ec46e324"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.739450 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.754678 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3eda-account-create-update-9krkt" event={"ID":"ade86985-ca70-4f21-ae7a-825353f912cb","Type":"ContainerDied","Data":"9846208d51de11a10ef4cc74656c4c6a063aa48cc86a6416c8bd6ac5d7eb4865"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.755173 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3eda-account-create-update-9krkt" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.757321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-07af-account-create-update-wm8x8" event={"ID":"7a55cd9e-7386-41a1-912c-0876a917bd93","Type":"ContainerDied","Data":"138745f101a65274022c16971d361d1fcf306be1f5f446c8620362a101f67679"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.757421 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-07af-account-create-update-wm8x8" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.776769 4799 generic.go:334] "Generic (PLEG): container finished" podID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerID="8cd4ca5237c50f4d23bf8d52a5873e5a8a629a0e179bd22093979620a71f464d" exitCode=0 Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.776883 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fbb51a95-a5db-4e7c-8cca-a59d07200ad5","Type":"ContainerDied","Data":"8cd4ca5237c50f4d23bf8d52a5873e5a8a629a0e179bd22093979620a71f464d"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.836253 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c37c-account-create-update-mb86j" event={"ID":"1ac2fc5a-2192-497c-ad7f-76a3fef58da6","Type":"ContainerDied","Data":"7787c5f2a4243ae1971d2c7f8d79b348bd25ff6a781a046bec44b98290010ae0"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.836376 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c37c-account-create-update-mb86j" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.839363 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c72-account-create-update-mpznk" event={"ID":"2fa13966-417e-4920-8ecc-5afc73396410","Type":"ContainerStarted","Data":"68a5b6483afc7f56e5996950e048d2e072ab2b26d636229789d1047907be6ae8"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.857621 4799 scope.go:117] "RemoveContainer" containerID="f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a" Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.862007 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a\": container with ID starting with f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a not found: ID does not exist" containerID="f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.862056 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a"} err="failed to get container status \"f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a\": rpc error: code = NotFound desc = could not find container \"f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a\": container with ID starting with f1755401242c059dad35b5bc55293a45e3c2338cff7e596ef0a63fa469e5e36a not found: ID does not exist" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.862087 4799 scope.go:117] "RemoveContainer" containerID="a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815" Jan 27 08:11:57 crc kubenswrapper[4799]: E0127 08:11:57.863513 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815\": container with ID starting with a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815 not found: ID does not exist" containerID="a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.863566 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815"} err="failed to get container status \"a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815\": rpc error: code = NotFound desc = could not find container \"a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815\": container with ID starting with a22901acd6a50b1282be5aa7812b776f75d8908b54e3c27f5cd0447e61303815 not found: ID does not exist" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.863591 4799 scope.go:117] "RemoveContainer" containerID="d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.905821 4799 generic.go:334] "Generic (PLEG): container finished" podID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerID="9163187825d75d65281abfc71d2b39e88d5e1b584e17b29a6cb086d7ce38d30f" exitCode=0 Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.905910 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4","Type":"ContainerDied","Data":"9163187825d75d65281abfc71d2b39e88d5e1b584e17b29a6cb086d7ce38d30f"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.917077 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-lx6nr_c92846fc-e305-4af9-816a-4067b79d2403/ovn-controller/0.log" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.917160 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lx6nr" Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.929141 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6sqk" event={"ID":"3c0d170a-443e-438c-b4cd-0be234b7594c","Type":"ContainerStarted","Data":"b0e5b9ce82e0d25ae73b26c4db605a4da0cb9c9d9a523f627cb61870f2c5183a"} Jan 27 08:11:57 crc kubenswrapper[4799]: I0127 08:11:57.951510 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" event={"ID":"caa95ce2-79d9-4314-af1c-6d3b93667cb5","Type":"ContainerStarted","Data":"c9fd979a05216d2d96ffeb8a0d2cd3eacd78d1bf364dd444e29c0a6feacf0d6a"} Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.070200 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-ovn-controller-tls-certs\") pod \"c92846fc-e305-4af9-816a-4067b79d2403\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.070280 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run-ovn\") pod \"c92846fc-e305-4af9-816a-4067b79d2403\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.070331 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run\") pod \"c92846fc-e305-4af9-816a-4067b79d2403\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.070393 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-combined-ca-bundle\") pod \"c92846fc-e305-4af9-816a-4067b79d2403\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.070463 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btb6k\" (UniqueName: \"kubernetes.io/projected/c92846fc-e305-4af9-816a-4067b79d2403-kube-api-access-btb6k\") pod \"c92846fc-e305-4af9-816a-4067b79d2403\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.070500 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-log-ovn\") pod \"c92846fc-e305-4af9-816a-4067b79d2403\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.070536 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c92846fc-e305-4af9-816a-4067b79d2403-scripts\") pod \"c92846fc-e305-4af9-816a-4067b79d2403\" (UID: \"c92846fc-e305-4af9-816a-4067b79d2403\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.074249 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c92846fc-e305-4af9-816a-4067b79d2403" (UID: "c92846fc-e305-4af9-816a-4067b79d2403"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.075131 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c92846fc-e305-4af9-816a-4067b79d2403" (UID: "c92846fc-e305-4af9-816a-4067b79d2403"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.075195 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run" (OuterVolumeSpecName: "var-run") pod "c92846fc-e305-4af9-816a-4067b79d2403" (UID: "c92846fc-e305-4af9-816a-4067b79d2403"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.076059 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c92846fc-e305-4af9-816a-4067b79d2403-scripts" (OuterVolumeSpecName: "scripts") pod "c92846fc-e305-4af9-816a-4067b79d2403" (UID: "c92846fc-e305-4af9-816a-4067b79d2403"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.083092 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c92846fc-e305-4af9-816a-4067b79d2403-kube-api-access-btb6k" (OuterVolumeSpecName: "kube-api-access-btb6k") pod "c92846fc-e305-4af9-816a-4067b79d2403" (UID: "c92846fc-e305-4af9-816a-4067b79d2403"). InnerVolumeSpecName "kube-api-access-btb6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.125918 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85c6c54fbb-zhvhw" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.126210 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85c6c54fbb-zhvhw" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.138559 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c92846fc-e305-4af9-816a-4067b79d2403" (UID: "c92846fc-e305-4af9-816a-4067b79d2403"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.149399 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "c92846fc-e305-4af9-816a-4067b79d2403" (UID: "c92846fc-e305-4af9-816a-4067b79d2403"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.179634 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btb6k\" (UniqueName: \"kubernetes.io/projected/c92846fc-e305-4af9-816a-4067b79d2403-kube-api-access-btb6k\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.179659 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.179668 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c92846fc-e305-4af9-816a-4067b79d2403-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.179688 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.179697 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.179705 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c92846fc-e305-4af9-816a-4067b79d2403-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.179713 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c92846fc-e305-4af9-816a-4067b79d2403-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.221003 4799 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.234337 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.283484 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.284208 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="sg-core" containerID="cri-o://f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32" gracePeriod=30 Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.284372 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="proxy-httpd" containerID="cri-o://bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a" gracePeriod=30 Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.284436 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-notification-agent" containerID="cri-o://087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774" gracePeriod=30 Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.284153 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-central-agent" containerID="cri-o://4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528" gracePeriod=30 Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.284874 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.301232 4799 scope.go:117] "RemoveContainer" containerID="a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.301613 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.324267 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-07af-account-create-update-wm8x8"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.338243 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-07af-account-create-update-wm8x8"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.364358 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.364544 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0707039f-a588-4975-a71f-dfe2054ba4e6" containerName="kube-state-metrics" containerID="cri-o://72014f16587b1075c455dccabf89b73f3156eb870ef19e90bd26b184dfc0c813" gracePeriod=30 Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.391951 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-scripts\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392008 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-public-tls-certs\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") 
" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392039 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-logs\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392076 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82jrg\" (UniqueName: \"kubernetes.io/projected/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-kube-api-access-82jrg\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392100 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-httpd-run\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392157 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fsvk\" (UniqueName: \"kubernetes.io/projected/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-kube-api-access-5fsvk\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392216 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-scripts\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392282 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-httpd-run\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392370 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392395 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392422 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-combined-ca-bundle\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392445 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-config-data\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392469 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-logs\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392492 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-internal-tls-certs\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392548 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-config-data\") pod \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\" (UID: \"fbb51a95-a5db-4e7c-8cca-a59d07200ad5\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.392578 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-combined-ca-bundle\") pod \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\" (UID: \"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.394614 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.409451 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-logs" (OuterVolumeSpecName: "logs") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.417669 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.417999 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-logs" (OuterVolumeSpecName: "logs") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.441074 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-scripts" (OuterVolumeSpecName: "scripts") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.441315 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-scripts" (OuterVolumeSpecName: "scripts") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.446571 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.446998 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-kube-api-access-5fsvk" (OuterVolumeSpecName: "kube-api-access-5fsvk") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "kube-api-access-5fsvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.496161 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fsvk\" (UniqueName: \"kubernetes.io/projected/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-kube-api-access-5fsvk\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.496188 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.496196 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.496224 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 27 08:11:58 crc 
kubenswrapper[4799]: I0127 08:11:58.496233 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.496241 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.496248 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.496256 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.501278 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.502082 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e17670-568e-498f-be09-ffb1406c3152" path="/var/lib/kubelet/pods/26e17670-568e-498f-be09-ffb1406c3152/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.502275 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-kube-api-access-82jrg" (OuterVolumeSpecName: "kube-api-access-82jrg") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "kube-api-access-82jrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.502724 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" path="/var/lib/kubelet/pods/2d63e438-475a-4686-861e-5fba1fcb6767/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.503283 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" path="/var/lib/kubelet/pods/57b30668-20df-41a6-80b4-ee59aea714dc/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.504715 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617cc655-aae2-4918-ba79-05e346cf9200" path="/var/lib/kubelet/pods/617cc655-aae2-4918-ba79-05e346cf9200/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.505604 4799 scope.go:117] "RemoveContainer" containerID="d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.508810 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" path="/var/lib/kubelet/pods/786fd8aa-3ed9-420c-bdcd-b15a36795e72/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 
08:11:58.509793 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a55cd9e-7386-41a1-912c-0876a917bd93" path="/var/lib/kubelet/pods/7a55cd9e-7386-41a1-912c-0876a917bd93/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.510707 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" path="/var/lib/kubelet/pods/b3f9568f-3dd7-4bdb-9b53-2e6ec291e813/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.511178 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57ed20b-0573-4924-aeee-bef05838e330" path="/var/lib/kubelet/pods/d57ed20b-0573-4924-aeee-bef05838e330/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.513135 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2\": container with ID starting with d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2 not found: ID does not exist" containerID="d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.513340 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2"} err="failed to get container status \"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2\": rpc error: code = NotFound desc = could not find container \"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2\": container with ID starting with d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2 not found: ID does not exist" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.513441 4799 scope.go:117] "RemoveContainer" containerID="a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 
08:11:58.517695 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db\": container with ID starting with a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db not found: ID does not exist" containerID="a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.517766 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db"} err="failed to get container status \"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db\": rpc error: code = NotFound desc = could not find container \"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db\": container with ID starting with a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db not found: ID does not exist" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.517799 4799 scope.go:117] "RemoveContainer" containerID="d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.519722 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" path="/var/lib/kubelet/pods/e6b7da0a-2774-4bae-ba2f-3b943e027082/volumes" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.521485 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2"} err="failed to get container status \"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2\": rpc error: code = NotFound desc = could not find container \"d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2\": container with ID starting with 
d0a0014e66a5c2d50763987c6970b274c017a47ee7c6c4a889fdb90cf619e3f2 not found: ID does not exist" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.521535 4799 scope.go:117] "RemoveContainer" containerID="a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.528621 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.534972 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db"} err="failed to get container status \"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db\": rpc error: code = NotFound desc = could not find container \"a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db\": container with ID starting with a410fa87aa27bc5aec692e3904102b11a8beb6ba58b545d710293acc1e8b85db not found: ID does not exist" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.535125 4799 scope.go:117] "RemoveContainer" containerID="4bd65b5bc7d74ca250c680832d09104b02e9463eba911724fc54dcc3b8686b82" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.551026 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c37c-account-create-update-mb86j"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.551074 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c37c-account-create-update-mb86j"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.598430 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.598680 4799 reconciler_common.go:293] "Volume detached for 
volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.598748 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82jrg\" (UniqueName: \"kubernetes.io/projected/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-kube-api-access-82jrg\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.608609 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3eda-account-create-update-9krkt"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.647379 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3eda-account-create-update-9krkt"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.650550 4799 scope.go:117] "RemoveContainer" containerID="85a174ff8c67bb38bd7c91ea1b4524dd854982c1de39341da5ed9b10d4340709" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.653981 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.655416 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0981-account-create-update-2s2dd"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.695854 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0981-account-create-update-2s2dd"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.700182 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.713597 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.714133 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="963110c4-038a-4208-b712-f66e885aff69" containerName="memcached" containerID="cri-o://d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a" gracePeriod=30 Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.728854 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fq478"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.733408 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g9rt8"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.737841 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0981-account-create-update-z9vjs"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.743567 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.744061 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747738 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747758 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-log" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747776 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747783 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-log" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747798 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747805 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-log" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747815 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-server" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747821 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-server" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747835 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="ovsdbserver-nb" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747841 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" 
containerName="ovsdbserver-nb" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747855 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747862 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747878 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747885 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747896 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747903 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.747913 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.747921 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748000 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="openstack-network-exporter" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748007 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="openstack-network-exporter" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748018 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-api" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748027 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-api" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748041 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed20b-0573-4924-aeee-bef05838e330" containerName="init" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748049 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed20b-0573-4924-aeee-bef05838e330" containerName="init" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748060 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748067 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener-log" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748078 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748084 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748096 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e17670-568e-498f-be09-ffb1406c3152" containerName="galera" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748102 4799 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="26e17670-568e-498f-be09-ffb1406c3152" containerName="galera" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748115 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748122 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748138 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748145 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748154 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748163 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748171 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="openstack-network-exporter" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748179 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="openstack-network-exporter" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748188 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="ovsdbserver-sb" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748196 4799 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="ovsdbserver-sb" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748207 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748215 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api-log" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748223 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed20b-0573-4924-aeee-bef05838e330" containerName="dnsmasq-dns" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748229 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed20b-0573-4924-aeee-bef05838e330" containerName="dnsmasq-dns" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748242 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748249 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker-log" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.748260 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e17670-568e-498f-be09-ffb1406c3152" containerName="mysql-bootstrap" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748266 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e17670-568e-498f-be09-ffb1406c3152" containerName="mysql-bootstrap" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748453 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-api" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748471 4799 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748479 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748490 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerName="placement-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748501 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-server" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748513 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748520 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e17670-568e-498f-be09-ffb1406c3152" containerName="galera" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748529 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="786fd8aa-3ed9-420c-bdcd-b15a36795e72" containerName="proxy-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748537 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="openstack-network-exporter" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748546 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="openstack-network-exporter" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748558 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f9568f-3dd7-4bdb-9b53-2e6ec291e813" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748566 
4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748577 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748586 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed20b-0573-4924-aeee-bef05838e330" containerName="dnsmasq-dns" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748595 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" containerName="glance-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748608 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b30668-20df-41a6-80b4-ee59aea714dc" containerName="barbican-worker" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748620 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" containerName="glance-httpd" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748630 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" containerName="cinder-api-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748644 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b7da0a-2774-4bae-ba2f-3b943e027082" containerName="ovsdbserver-nb" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748654 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="617cc655-aae2-4918-ba79-05e346cf9200" containerName="ovsdbserver-sb" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.748665 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c92846fc-e305-4af9-816a-4067b79d2403" containerName="ovn-controller" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 
08:11:58.748677 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d63e438-475a-4686-861e-5fba1fcb6767" containerName="barbican-keystone-listener-log" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.749267 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fq478"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.749389 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.752498 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.756136 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g9rt8"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.763989 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.776968 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0981-account-create-update-z9vjs"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.784125 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.784191 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7d94bcc8dc-5hh96"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.784739 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7d94bcc8dc-5hh96" podUID="b32c7a11-1bfb-494f-a2d9-8800ba707e94" containerName="keystone-api" containerID="cri-o://8ac05fa5a627833e782a394e656005413a8d6b8562b382febb6252fc92879e3a" gracePeriod=30 Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.786675 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.815827 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.816607 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.816639 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.836767 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-config-data" (OuterVolumeSpecName: "config-data") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.845327 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.855035 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fbb51a95-a5db-4e7c-8cca-a59d07200ad5" (UID: "fbb51a95-a5db-4e7c-8cca-a59d07200ad5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.860717 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bf84n"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.876898 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-config-data" (OuterVolumeSpecName: "config-data") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.890669 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0981-account-create-update-z9vjs"] Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.891539 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ndqzs operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-0981-account-create-update-z9vjs" podUID="f58cac96-5092-4157-a782-d11b81313966" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.902095 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bf84n"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.902387 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" (UID: "8d1ca94d-0dc1-402e-87b0-e76fc390a9a4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917238 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fa13966-417e-4920-8ecc-5afc73396410-operator-scripts\") pod \"2fa13966-417e-4920-8ecc-5afc73396410\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917309 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-public-tls-certs\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917352 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-internal-tls-certs\") pod \"c28c66b4-aa13-41ed-8045-b6f131d48146\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917381 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkf92\" (UniqueName: \"kubernetes.io/projected/2fa13966-417e-4920-8ecc-5afc73396410-kube-api-access-hkf92\") pod \"2fa13966-417e-4920-8ecc-5afc73396410\" (UID: \"2fa13966-417e-4920-8ecc-5afc73396410\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917434 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-scripts\") pod \"c28c66b4-aa13-41ed-8045-b6f131d48146\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917473 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c28c66b4-aa13-41ed-8045-b6f131d48146-logs\") pod \"c28c66b4-aa13-41ed-8045-b6f131d48146\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917503 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04e9a37-9722-491b-ada1-992d747e5bed-etc-machine-id\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917519 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcfk4\" (UniqueName: \"kubernetes.io/projected/b04e9a37-9722-491b-ada1-992d747e5bed-kube-api-access-pcfk4\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917569 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-config-data\") pod \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917589 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-logs\") pod \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917625 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data-custom\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917649 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-nova-metadata-tls-certs\") pod \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917666 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-combined-ca-bundle\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917711 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-combined-ca-bundle\") pod \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917733 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04e9a37-9722-491b-ada1-992d747e5bed-logs\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917758 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-combined-ca-bundle\") pod \"c28c66b4-aa13-41ed-8045-b6f131d48146\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917778 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data\") pod 
\"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917794 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-config-data\") pod \"c28c66b4-aa13-41ed-8045-b6f131d48146\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917817 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-public-tls-certs\") pod \"c28c66b4-aa13-41ed-8045-b6f131d48146\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917876 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk7sr\" (UniqueName: \"kubernetes.io/projected/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-kube-api-access-gk7sr\") pod \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\" (UID: \"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917899 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-scripts\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917921 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vms2q\" (UniqueName: \"kubernetes.io/projected/c28c66b4-aa13-41ed-8045-b6f131d48146-kube-api-access-vms2q\") pod \"c28c66b4-aa13-41ed-8045-b6f131d48146\" (UID: \"c28c66b4-aa13-41ed-8045-b6f131d48146\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.917940 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-internal-tls-certs\") pod \"b04e9a37-9722-491b-ada1-992d747e5bed\" (UID: \"b04e9a37-9722-491b-ada1-992d747e5bed\") " Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.921598 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts\") pod \"keystone-0981-account-create-update-z9vjs\" (UID: \"f58cac96-5092-4157-a782-d11b81313966\") " pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.921736 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndqzs\" (UniqueName: \"kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs\") pod \"keystone-0981-account-create-update-z9vjs\" (UID: \"f58cac96-5092-4157-a782-d11b81313966\") " pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.922009 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.922030 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.922045 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.922057 4799 reconciler_common.go:293] 
"Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb51a95-a5db-4e7c-8cca-a59d07200ad5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.922855 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa13966-417e-4920-8ecc-5afc73396410-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fa13966-417e-4920-8ecc-5afc73396410" (UID: "2fa13966-417e-4920-8ecc-5afc73396410"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.932943 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c28c66b4-aa13-41ed-8045-b6f131d48146-logs" (OuterVolumeSpecName: "logs") pod "c28c66b4-aa13-41ed-8045-b6f131d48146" (UID: "c28c66b4-aa13-41ed-8045-b6f131d48146"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.933078 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa13966-417e-4920-8ecc-5afc73396410-kube-api-access-hkf92" (OuterVolumeSpecName: "kube-api-access-hkf92") pod "2fa13966-417e-4920-8ecc-5afc73396410" (UID: "2fa13966-417e-4920-8ecc-5afc73396410"). InnerVolumeSpecName "kube-api-access-hkf92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.933134 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9vvt6"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.933625 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b04e9a37-9722-491b-ada1-992d747e5bed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.933952 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04e9a37-9722-491b-ada1-992d747e5bed-logs" (OuterVolumeSpecName: "logs") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.941520 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-logs" (OuterVolumeSpecName: "logs") pod "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" (UID: "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.943408 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-scripts" (OuterVolumeSpecName: "scripts") pod "c28c66b4-aa13-41ed-8045-b6f131d48146" (UID: "c28c66b4-aa13-41ed-8045-b6f131d48146"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.951377 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28c66b4-aa13-41ed-8045-b6f131d48146-kube-api-access-vms2q" (OuterVolumeSpecName: "kube-api-access-vms2q") pod "c28c66b4-aa13-41ed-8045-b6f131d48146" (UID: "c28c66b4-aa13-41ed-8045-b6f131d48146"). InnerVolumeSpecName "kube-api-access-vms2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.951567 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-scripts" (OuterVolumeSpecName: "scripts") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.953929 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.953924 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04e9a37-9722-491b-ada1-992d747e5bed-kube-api-access-pcfk4" (OuterVolumeSpecName: "kube-api-access-pcfk4") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "kube-api-access-pcfk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.968155 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-kube-api-access-gk7sr" (OuterVolumeSpecName: "kube-api-access-gk7sr") pod "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" (UID: "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89"). InnerVolumeSpecName "kube-api-access-gk7sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.968205 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 27 08:11:58 crc kubenswrapper[4799]: E0127 08:11:58.986874 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.993153 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fbb51a95-a5db-4e7c-8cca-a59d07200ad5","Type":"ContainerDied","Data":"d26a7759bc073d43ed7139c4eabc6e26943a22df69a9ca1460423b7cd817ad0a"} Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.993205 4799 scope.go:117] "RemoveContainer" containerID="8cd4ca5237c50f4d23bf8d52a5873e5a8a629a0e179bd22093979620a71f464d" Jan 27 08:11:58 crc kubenswrapper[4799]: I0127 08:11:58.995427 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.002901 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.002947 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d1ca94d-0dc1-402e-87b0-e76fc390a9a4","Type":"ContainerDied","Data":"983e1aee27eff1308ac3123ecad7bb84214018db1cd40ee868c1c898cb997849"} Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.015925 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.016221 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="ovn-northd" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024170 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts\") pod \"keystone-0981-account-create-update-z9vjs\" (UID: \"f58cac96-5092-4157-a782-d11b81313966\") " pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024233 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndqzs\" (UniqueName: 
\"kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs\") pod \"keystone-0981-account-create-update-z9vjs\" (UID: \"f58cac96-5092-4157-a782-d11b81313966\") " pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024506 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b04e9a37-9722-491b-ada1-992d747e5bed-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024523 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk7sr\" (UniqueName: \"kubernetes.io/projected/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-kube-api-access-gk7sr\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024540 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024552 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vms2q\" (UniqueName: \"kubernetes.io/projected/c28c66b4-aa13-41ed-8045-b6f131d48146-kube-api-access-vms2q\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024564 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fa13966-417e-4920-8ecc-5afc73396410-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024575 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkf92\" (UniqueName: \"kubernetes.io/projected/2fa13966-417e-4920-8ecc-5afc73396410-kube-api-access-hkf92\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024589 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024599 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c28c66b4-aa13-41ed-8045-b6f131d48146-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024609 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b04e9a37-9722-491b-ada1-992d747e5bed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024621 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcfk4\" (UniqueName: \"kubernetes.io/projected/b04e9a37-9722-491b-ada1-992d747e5bed-kube-api-access-pcfk4\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024631 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.024642 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.025975 4799 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.026359 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts podName:f58cac96-5092-4157-a782-d11b81313966 nodeName:}" failed. No retries permitted until 2026-01-27 08:11:59.526338874 +0000 UTC m=+1585.837442939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts") pod "keystone-0981-account-create-update-z9vjs" (UID: "f58cac96-5092-4157-a782-d11b81313966") : configmap "openstack-scripts" not found Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.032565 4799 generic.go:334] "Generic (PLEG): container finished" podID="b04e9a37-9722-491b-ada1-992d747e5bed" containerID="116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.032743 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.033643 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b04e9a37-9722-491b-ada1-992d747e5bed","Type":"ContainerDied","Data":"116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.033680 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b04e9a37-9722-491b-ada1-992d747e5bed","Type":"ContainerDied","Data":"81cc342fe76174057a1e4ac1d424f9ae5b786539e062c9b4410b8ca621aa20ea"} Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.033842 4799 projected.go:194] Error preparing data for projected volume kube-api-access-ndqzs for pod openstack/keystone-0981-account-create-update-z9vjs: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.033917 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs podName:f58cac96-5092-4157-a782-d11b81313966 nodeName:}" failed. No retries permitted until 2026-01-27 08:11:59.533894652 +0000 UTC m=+1585.844998717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ndqzs" (UniqueName: "kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs") pod "keystone-0981-account-create-update-z9vjs" (UID: "f58cac96-5092-4157-a782-d11b81313966") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.072339 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-lx6nr_c92846fc-e305-4af9-816a-4067b79d2403/ovn-controller/0.log" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.072552 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lx6nr" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.072623 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lx6nr" event={"ID":"c92846fc-e305-4af9-816a-4067b79d2403","Type":"ContainerDied","Data":"90f54639820210985b8db6a1ac08dca9fcb1bd24e3e1719337175bbe0a116f89"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.074580 4799 generic.go:334] "Generic (PLEG): container finished" podID="034b328a-c365-4b0a-8346-1cd571d65921" containerID="3aded0d751c3418825c7df5a4d4839d2ed013993df821070e2de8ffc8b9aa2d3" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.074627 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"034b328a-c365-4b0a-8346-1cd571d65921","Type":"ContainerDied","Data":"3aded0d751c3418825c7df5a4d4839d2ed013993df821070e2de8ffc8b9aa2d3"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.077617 4799 generic.go:334] "Generic (PLEG): container finished" podID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerID="f6c0c751dfd74d698477e4e018861e43ef7141cef238f287a434550b2a21af4b" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.077684 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-85c6c54fbb-zhvhw" event={"ID":"0dbfc3a0-883d-46a6-af9b-879efb42840e","Type":"ContainerDied","Data":"f6c0c751dfd74d698477e4e018861e43ef7141cef238f287a434550b2a21af4b"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.090204 4799 generic.go:334] "Generic (PLEG): container finished" podID="c28c66b4-aa13-41ed-8045-b6f131d48146" containerID="c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.090271 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8985574d-z64hk" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.090288 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8985574d-z64hk" event={"ID":"c28c66b4-aa13-41ed-8045-b6f131d48146","Type":"ContainerDied","Data":"c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.090366 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8985574d-z64hk" event={"ID":"c28c66b4-aa13-41ed-8045-b6f131d48146","Type":"ContainerDied","Data":"028a6f7811fa87472ac04365809f2fa32017c274a71bdd8731df32a4ae803c9a"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.104103 4799 generic.go:334] "Generic (PLEG): container finished" podID="0707039f-a588-4975-a71f-dfe2054ba4e6" containerID="72014f16587b1075c455dccabf89b73f3156eb870ef19e90bd26b184dfc0c813" exitCode=2 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.104940 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0707039f-a588-4975-a71f-dfe2054ba4e6","Type":"ContainerDied","Data":"72014f16587b1075c455dccabf89b73f3156eb870ef19e90bd26b184dfc0c813"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.109876 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c0d170a-443e-438c-b4cd-0be234b7594c" 
containerID="54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.109931 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6sqk" event={"ID":"3c0d170a-443e-438c-b4cd-0be234b7594c","Type":"ContainerDied","Data":"54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.135402 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdad1fc3-eebb-4dcb-b69a-076d1dc63a89","Type":"ContainerDied","Data":"a4560661f3873cc9c6037b9ddf6cdbcd8b6e6cb830fdfc3b988ebe18f50da85a"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.135479 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.141115 4799 generic.go:334] "Generic (PLEG): container finished" podID="69778bc9-c84e-42d0-9645-7fd3afa2ca28" containerID="bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.141170 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69778bc9-c84e-42d0-9645-7fd3afa2ca28","Type":"ContainerDied","Data":"bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.143119 4799 generic.go:334] "Generic (PLEG): container finished" podID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerID="bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.143134 4799 generic.go:334] "Generic (PLEG): container finished" podID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerID="f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32" exitCode=2 Jan 27 08:11:59 crc kubenswrapper[4799]: 
I0127 08:11:59.143143 4799 generic.go:334] "Generic (PLEG): container finished" podID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerID="4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528" exitCode=0 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.143176 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerDied","Data":"bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.143193 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerDied","Data":"f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.143205 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerDied","Data":"4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.144293 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5c72-account-create-update-mpznk" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.149335 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.149447 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5c72-account-create-update-mpznk" event={"ID":"2fa13966-417e-4920-8ecc-5afc73396410","Type":"ContainerDied","Data":"68a5b6483afc7f56e5996950e048d2e072ab2b26d636229789d1047907be6ae8"} Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.149479 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" (UID: "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.183772 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.183979 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data" (OuterVolumeSpecName: "config-data") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.193506 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": dial tcp 10.217.0.196:3000: connect: connection refused" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.204423 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-config-data" (OuterVolumeSpecName: "config-data") pod "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" (UID: "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.238378 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.238408 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.238418 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.238426 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.261396 4799 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" (UID: "cdad1fc3-eebb-4dcb-b69a-076d1dc63a89"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.266492 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerName="galera" containerID="cri-o://bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3" gracePeriod=30 Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.281400 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c28c66b4-aa13-41ed-8045-b6f131d48146" (UID: "c28c66b4-aa13-41ed-8045-b6f131d48146"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.294992 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-config-data" (OuterVolumeSpecName: "config-data") pod "c28c66b4-aa13-41ed-8045-b6f131d48146" (UID: "c28c66b4-aa13-41ed-8045-b6f131d48146"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.294965 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.326322 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b04e9a37-9722-491b-ada1-992d747e5bed" (UID: "b04e9a37-9722-491b-ada1-992d747e5bed"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.337679 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c28c66b4-aa13-41ed-8045-b6f131d48146" (UID: "c28c66b4-aa13-41ed-8045-b6f131d48146"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.340800 4799 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.340834 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.340848 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.340860 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.340871 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b04e9a37-9722-491b-ada1-992d747e5bed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.340881 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.370741 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.371038 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c28c66b4-aa13-41ed-8045-b6f131d48146" (UID: "c28c66b4-aa13-41ed-8045-b6f131d48146"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.376117 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66 is running failed: container process not found" containerID="bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.376644 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66 is running failed: container process not found" containerID="bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.377136 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66 is running failed: container process not found" containerID="bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.377175 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="69778bc9-c84e-42d0-9645-7fd3afa2ca28" containerName="nova-scheduler-scheduler" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.387377 4799 scope.go:117] "RemoveContainer" 
containerID="47f6d93a69dd90911aea2e658078c1ee7cf68c9157a2c5886005700a1107b370" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.402023 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.428316 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.437496 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.447422 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.451415 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454254 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-combined-ca-bundle\") pod \"0dbfc3a0-883d-46a6-af9b-879efb42840e\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454377 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4fdq\" (UniqueName: \"kubernetes.io/projected/caa95ce2-79d9-4314-af1c-6d3b93667cb5-kube-api-access-h4fdq\") pod \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454437 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbfc3a0-883d-46a6-af9b-879efb42840e-logs\") pod 
\"0dbfc3a0-883d-46a6-af9b-879efb42840e\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454504 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa95ce2-79d9-4314-af1c-6d3b93667cb5-operator-scripts\") pod \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\" (UID: \"caa95ce2-79d9-4314-af1c-6d3b93667cb5\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454536 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k76kw\" (UniqueName: \"kubernetes.io/projected/0dbfc3a0-883d-46a6-af9b-879efb42840e-kube-api-access-k76kw\") pod \"0dbfc3a0-883d-46a6-af9b-879efb42840e\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454588 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-internal-tls-certs\") pod \"0dbfc3a0-883d-46a6-af9b-879efb42840e\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454624 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data-custom\") pod \"0dbfc3a0-883d-46a6-af9b-879efb42840e\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454714 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-public-tls-certs\") pod \"0dbfc3a0-883d-46a6-af9b-879efb42840e\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.454747 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data\") pod \"0dbfc3a0-883d-46a6-af9b-879efb42840e\" (UID: \"0dbfc3a0-883d-46a6-af9b-879efb42840e\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.455383 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c28c66b4-aa13-41ed-8045-b6f131d48146-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.456033 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbfc3a0-883d-46a6-af9b-879efb42840e-logs" (OuterVolumeSpecName: "logs") pod "0dbfc3a0-883d-46a6-af9b-879efb42840e" (UID: "0dbfc3a0-883d-46a6-af9b-879efb42840e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.456108 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa95ce2-79d9-4314-af1c-6d3b93667cb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caa95ce2-79d9-4314-af1c-6d3b93667cb5" (UID: "caa95ce2-79d9-4314-af1c-6d3b93667cb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.460195 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0dbfc3a0-883d-46a6-af9b-879efb42840e" (UID: "0dbfc3a0-883d-46a6-af9b-879efb42840e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.462197 4799 scope.go:117] "RemoveContainer" containerID="9163187825d75d65281abfc71d2b39e88d5e1b584e17b29a6cb086d7ce38d30f" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.473171 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.478531 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa95ce2-79d9-4314-af1c-6d3b93667cb5-kube-api-access-h4fdq" (OuterVolumeSpecName: "kube-api-access-h4fdq") pod "caa95ce2-79d9-4314-af1c-6d3b93667cb5" (UID: "caa95ce2-79d9-4314-af1c-6d3b93667cb5"). InnerVolumeSpecName "kube-api-access-h4fdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.489898 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.489942 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.489969 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbfc3a0-883d-46a6-af9b-879efb42840e-kube-api-access-k76kw" (OuterVolumeSpecName: "kube-api-access-k76kw") pod "0dbfc3a0-883d-46a6-af9b-879efb42840e" (UID: "0dbfc3a0-883d-46a6-af9b-879efb42840e"). InnerVolumeSpecName "kube-api-access-k76kw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.512957 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5c72-account-create-update-mpznk"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.554913 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dbfc3a0-883d-46a6-af9b-879efb42840e" (UID: "0dbfc3a0-883d-46a6-af9b-879efb42840e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.556699 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rtx8\" (UniqueName: \"kubernetes.io/projected/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-api-access-7rtx8\") pod \"0707039f-a588-4975-a71f-dfe2054ba4e6\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.556754 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7rgl\" (UniqueName: \"kubernetes.io/projected/034b328a-c365-4b0a-8346-1cd571d65921-kube-api-access-p7rgl\") pod \"034b328a-c365-4b0a-8346-1cd571d65921\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.556844 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-combined-ca-bundle\") pod \"034b328a-c365-4b0a-8346-1cd571d65921\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.556891 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-internal-tls-certs\") pod \"034b328a-c365-4b0a-8346-1cd571d65921\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.556926 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-certs\") pod \"0707039f-a588-4975-a71f-dfe2054ba4e6\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.556948 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-combined-ca-bundle\") pod \"0707039f-a588-4975-a71f-dfe2054ba4e6\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.556992 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-config-data\") pod \"034b328a-c365-4b0a-8346-1cd571d65921\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557010 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-public-tls-certs\") pod \"034b328a-c365-4b0a-8346-1cd571d65921\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557050 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034b328a-c365-4b0a-8346-1cd571d65921-logs\") pod \"034b328a-c365-4b0a-8346-1cd571d65921\" (UID: \"034b328a-c365-4b0a-8346-1cd571d65921\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 
08:11:59.557071 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-config\") pod \"0707039f-a588-4975-a71f-dfe2054ba4e6\" (UID: \"0707039f-a588-4975-a71f-dfe2054ba4e6\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557395 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts\") pod \"keystone-0981-account-create-update-z9vjs\" (UID: \"f58cac96-5092-4157-a782-d11b81313966\") " pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndqzs\" (UniqueName: \"kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs\") pod \"keystone-0981-account-create-update-z9vjs\" (UID: \"f58cac96-5092-4157-a782-d11b81313966\") " pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557515 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbfc3a0-883d-46a6-af9b-879efb42840e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557528 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa95ce2-79d9-4314-af1c-6d3b93667cb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557539 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k76kw\" (UniqueName: \"kubernetes.io/projected/0dbfc3a0-883d-46a6-af9b-879efb42840e-kube-api-access-k76kw\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 
crc kubenswrapper[4799]: I0127 08:11:59.557548 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557557 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.557566 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4fdq\" (UniqueName: \"kubernetes.io/projected/caa95ce2-79d9-4314-af1c-6d3b93667cb5-kube-api-access-h4fdq\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.560064 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5c72-account-create-update-mpznk"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.559849 4799 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.560254 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/034b328a-c365-4b0a-8346-1cd571d65921-logs" (OuterVolumeSpecName: "logs") pod "034b328a-c365-4b0a-8346-1cd571d65921" (UID: "034b328a-c365-4b0a-8346-1cd571d65921"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.560608 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts podName:f58cac96-5092-4157-a782-d11b81313966 nodeName:}" failed. No retries permitted until 2026-01-27 08:12:00.560586925 +0000 UTC m=+1586.871690990 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts") pod "keystone-0981-account-create-update-z9vjs" (UID: "f58cac96-5092-4157-a782-d11b81313966") : configmap "openstack-scripts" not found Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.561724 4799 projected.go:194] Error preparing data for projected volume kube-api-access-ndqzs for pod openstack/keystone-0981-account-create-update-z9vjs: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.561781 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs podName:f58cac96-5092-4157-a782-d11b81313966 nodeName:}" failed. No retries permitted until 2026-01-27 08:12:00.561772018 +0000 UTC m=+1586.872876083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ndqzs" (UniqueName: "kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs") pod "keystone-0981-account-create-update-z9vjs" (UID: "f58cac96-5092-4157-a782-d11b81313966") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.577547 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/034b328a-c365-4b0a-8346-1cd571d65921-kube-api-access-p7rgl" (OuterVolumeSpecName: "kube-api-access-p7rgl") pod "034b328a-c365-4b0a-8346-1cd571d65921" (UID: "034b328a-c365-4b0a-8346-1cd571d65921"). InnerVolumeSpecName "kube-api-access-p7rgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.578697 4799 scope.go:117] "RemoveContainer" containerID="645e7a56ca8ac37dc99398357884c68673d8c691b0491e5e93509703b5f8f491" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.585719 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-api-access-7rtx8" (OuterVolumeSpecName: "kube-api-access-7rtx8") pod "0707039f-a588-4975-a71f-dfe2054ba4e6" (UID: "0707039f-a588-4975-a71f-dfe2054ba4e6"). InnerVolumeSpecName "kube-api-access-7rtx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.588399 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lx6nr"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.589639 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0dbfc3a0-883d-46a6-af9b-879efb42840e" (UID: "0dbfc3a0-883d-46a6-af9b-879efb42840e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.596375 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lx6nr"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.601411 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.607364 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.611477 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.611805 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.611971 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.613579 4799 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.613608 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.613972 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.614061 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.615983 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.617382 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.617412 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="8bca1b10-545f-4e35-a5af-e760d464d0ff" containerName="nova-cell1-conductor-conductor" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.617576 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.617595 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.620433 4799 scope.go:117] "RemoveContainer" containerID="116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.638666 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9vvt6" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.638976 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c8985574d-z64hk"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.639339 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "034b328a-c365-4b0a-8346-1cd571d65921" (UID: "034b328a-c365-4b0a-8346-1cd571d65921"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.646115 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7c8985574d-z64hk"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.658253 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.658720 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f97b84a5-a34c-405f-8357-70cad8efedbc-operator-scripts\") pod \"f97b84a5-a34c-405f-8357-70cad8efedbc\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.658760 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68ddq\" (UniqueName: \"kubernetes.io/projected/f97b84a5-a34c-405f-8357-70cad8efedbc-kube-api-access-68ddq\") pod \"f97b84a5-a34c-405f-8357-70cad8efedbc\" (UID: \"f97b84a5-a34c-405f-8357-70cad8efedbc\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.659369 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034b328a-c365-4b0a-8346-1cd571d65921-logs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc 
kubenswrapper[4799]: I0127 08:11:59.659373 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97b84a5-a34c-405f-8357-70cad8efedbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f97b84a5-a34c-405f-8357-70cad8efedbc" (UID: "f97b84a5-a34c-405f-8357-70cad8efedbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.659388 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rtx8\" (UniqueName: \"kubernetes.io/projected/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-api-access-7rtx8\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.659435 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.659447 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7rgl\" (UniqueName: \"kubernetes.io/projected/034b328a-c365-4b0a-8346-1cd571d65921-kube-api-access-p7rgl\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.659458 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.661032 4799 scope.go:117] "RemoveContainer" containerID="f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.674982 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97b84a5-a34c-405f-8357-70cad8efedbc-kube-api-access-68ddq" (OuterVolumeSpecName: "kube-api-access-68ddq") pod 
"f97b84a5-a34c-405f-8357-70cad8efedbc" (UID: "f97b84a5-a34c-405f-8357-70cad8efedbc"). InnerVolumeSpecName "kube-api-access-68ddq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.675034 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0707039f-a588-4975-a71f-dfe2054ba4e6" (UID: "0707039f-a588-4975-a71f-dfe2054ba4e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.675048 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.680889 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "0707039f-a588-4975-a71f-dfe2054ba4e6" (UID: "0707039f-a588-4975-a71f-dfe2054ba4e6"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.683944 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.684508 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data" (OuterVolumeSpecName: "config-data") pod "0dbfc3a0-883d-46a6-af9b-879efb42840e" (UID: "0dbfc3a0-883d-46a6-af9b-879efb42840e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.696607 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0dbfc3a0-883d-46a6-af9b-879efb42840e" (UID: "0dbfc3a0-883d-46a6-af9b-879efb42840e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.718535 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-config-data" (OuterVolumeSpecName: "config-data") pod "034b328a-c365-4b0a-8346-1cd571d65921" (UID: "034b328a-c365-4b0a-8346-1cd571d65921"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.722418 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "034b328a-c365-4b0a-8346-1cd571d65921" (UID: "034b328a-c365-4b0a-8346-1cd571d65921"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.731184 4799 scope.go:117] "RemoveContainer" containerID="116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.731676 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3\": container with ID starting with 116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3 not found: ID does not exist" containerID="116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.731739 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3"} err="failed to get container status \"116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3\": rpc error: code = NotFound desc = could not find container \"116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3\": container with ID starting with 116993352c0ec841dd44d8855c494723f90029b4a77addd4297eb275455f13c3 not found: ID does not exist" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.731767 4799 scope.go:117] "RemoveContainer" containerID="f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.732839 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6\": container with ID starting with f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6 not found: ID does not exist" containerID="f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.732883 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6"} err="failed to get container status \"f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6\": rpc error: code = NotFound desc = could not find container \"f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6\": container with ID starting with f06952a9ab57e05a35f1acbc89982309bc73e3bb682bdad8e9a6892475d7d2d6 not found: ID does not exist" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.732911 4799 scope.go:117] "RemoveContainer" containerID="dba41f475ea813cae4d687243584f94982d501ee64961725b7a4d2f0b2272bd9" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.737410 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "034b328a-c365-4b0a-8346-1cd571d65921" (UID: "034b328a-c365-4b0a-8346-1cd571d65921"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.738476 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "0707039f-a588-4975-a71f-dfe2054ba4e6" (UID: "0707039f-a588-4975-a71f-dfe2054ba4e6"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.759971 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-config-data\") pod \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760071 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-combined-ca-bundle\") pod \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760180 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvhkn\" (UniqueName: \"kubernetes.io/projected/69778bc9-c84e-42d0-9645-7fd3afa2ca28-kube-api-access-fvhkn\") pod \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\" (UID: \"69778bc9-c84e-42d0-9645-7fd3afa2ca28\") " Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760585 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f97b84a5-a34c-405f-8357-70cad8efedbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760604 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68ddq\" (UniqueName: \"kubernetes.io/projected/f97b84a5-a34c-405f-8357-70cad8efedbc-kube-api-access-68ddq\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760615 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 
08:11:59.760624 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760635 4799 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760643 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760653 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760661 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/034b328a-c365-4b0a-8346-1cd571d65921-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760671 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbfc3a0-883d-46a6-af9b-879efb42840e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.760679 4799 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0707039f-a588-4975-a71f-dfe2054ba4e6-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.762167 4799 scope.go:117] "RemoveContainer" 
containerID="c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.766406 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69778bc9-c84e-42d0-9645-7fd3afa2ca28-kube-api-access-fvhkn" (OuterVolumeSpecName: "kube-api-access-fvhkn") pod "69778bc9-c84e-42d0-9645-7fd3afa2ca28" (UID: "69778bc9-c84e-42d0-9645-7fd3afa2ca28"). InnerVolumeSpecName "kube-api-access-fvhkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.785877 4799 scope.go:117] "RemoveContainer" containerID="801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.788344 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-config-data" (OuterVolumeSpecName: "config-data") pod "69778bc9-c84e-42d0-9645-7fd3afa2ca28" (UID: "69778bc9-c84e-42d0-9645-7fd3afa2ca28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.791502 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69778bc9-c84e-42d0-9645-7fd3afa2ca28" (UID: "69778bc9-c84e-42d0-9645-7fd3afa2ca28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.816706 4799 scope.go:117] "RemoveContainer" containerID="c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.817780 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409\": container with ID starting with c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409 not found: ID does not exist" containerID="c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.817816 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409"} err="failed to get container status \"c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409\": rpc error: code = NotFound desc = could not find container \"c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409\": container with ID starting with c2b2e7928c28e65fd10ed3d02c25b2c2d9f228016a43047cdc597b20a6fb8409 not found: ID does not exist" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.817842 4799 scope.go:117] "RemoveContainer" containerID="801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.818201 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b\": container with ID starting with 801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b not found: ID does not exist" containerID="801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.818230 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b"} err="failed to get container status \"801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b\": rpc error: code = NotFound desc = could not find container \"801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b\": container with ID starting with 801f7188ac9b031ee354e36b6557438a9bb09e2611de5ef5ebabce452d6cad4b not found: ID does not exist" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.818271 4799 scope.go:117] "RemoveContainer" containerID="9baffec134930c3ab03eac84affdcfbddaeaf581caaa16db927895c2feaa8b6f" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.837465 4799 scope.go:117] "RemoveContainer" containerID="947b7b89455c6bd8431f5a9a840ca3ceaaa323705be0a8b142bb4d7b5cf00a77" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.862195 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.862239 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvhkn\" (UniqueName: \"kubernetes.io/projected/69778bc9-c84e-42d0-9645-7fd3afa2ca28-kube-api-access-fvhkn\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: I0127 08:11:59.862251 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69778bc9-c84e-42d0-9645-7fd3afa2ca28-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.862344 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 08:11:59 crc kubenswrapper[4799]: E0127 08:11:59.862399 4799 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data podName:0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0 nodeName:}" failed. No retries permitted until 2026-01-27 08:12:07.862381155 +0000 UTC m=+1594.173485220 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data") pod "rabbitmq-cell1-server-0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0") : configmap "rabbitmq-cell1-config-data" not found Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.156257 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"034b328a-c365-4b0a-8346-1cd571d65921","Type":"ContainerDied","Data":"6df7a8bbfa5f25aba3030c54619c1cd478cb56bca5857024395af234fc65f172"} Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.156335 4799 scope.go:117] "RemoveContainer" containerID="3aded0d751c3418825c7df5a4d4839d2ed013993df821070e2de8ffc8b9aa2d3" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.156566 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.164707 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.164719 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f1e-account-create-update-9pcqg" event={"ID":"caa95ce2-79d9-4314-af1c-6d3b93667cb5","Type":"ContainerDied","Data":"c9fd979a05216d2d96ffeb8a0d2cd3eacd78d1bf364dd444e29c0a6feacf0d6a"} Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.175980 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerID="ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3" exitCode=0 Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.176046 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6sqk" event={"ID":"3c0d170a-443e-438c-b4cd-0be234b7594c","Type":"ContainerDied","Data":"ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3"} Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.182902 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9vvt6" event={"ID":"f97b84a5-a34c-405f-8357-70cad8efedbc","Type":"ContainerDied","Data":"333e93a7500bc2fa96959f58aa041856b5f7249b66b7247beb95c35c0357f70c"} Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.182957 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9vvt6" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.183905 4799 scope.go:117] "RemoveContainer" containerID="e4071f639ad9f711f3ce82cf2b36a21d6fcce03277af1036060c6ec9c693832d" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.190354 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85c6c54fbb-zhvhw" event={"ID":"0dbfc3a0-883d-46a6-af9b-879efb42840e","Type":"ContainerDied","Data":"4ea8cf7ad52dfadf665eee540a9fb0534c8e26a962cae0d4cd1154cb7bc37cce"} Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.190469 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85c6c54fbb-zhvhw" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.198648 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.199547 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69778bc9-c84e-42d0-9645-7fd3afa2ca28","Type":"ContainerDied","Data":"f1c93ed585f701ee8e159ab0d001f5debb73bda9c36a4fa0913614dbca815445"} Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.204106 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.204872 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0707039f-a588-4975-a71f-dfe2054ba4e6","Type":"ContainerDied","Data":"981ae664c354205006d9e304e0fee3a52c3ca02640576e79a28ad441b8a3f40f"} Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.207719 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0981-account-create-update-z9vjs" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.231513 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.266729 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 08:12:00 crc kubenswrapper[4799]: E0127 08:12:00.292675 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:12:00 crc kubenswrapper[4799]: E0127 08:12:00.294415 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:12:00 crc kubenswrapper[4799]: E0127 08:12:00.296180 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 08:12:00 crc kubenswrapper[4799]: E0127 08:12:00.296252 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3c53857a-2e9c-4057-9f69-3611704d36f5" containerName="nova-cell0-conductor-conductor" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 
08:12:00.362152 4799 scope.go:117] "RemoveContainer" containerID="138d26b6968877e65cfe794731a3ffaae35afc65d14bc11d0641011dd81c571a" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.420242 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-9pcqg"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.428052 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f1e-account-create-update-9pcqg"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.435940 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0981-account-create-update-z9vjs"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.451533 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0981-account-create-update-z9vjs"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.474412 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="034b328a-c365-4b0a-8346-1cd571d65921" path="/var/lib/kubelet/pods/034b328a-c365-4b0a-8346-1cd571d65921/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.474742 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndqzs\" (UniqueName: \"kubernetes.io/projected/f58cac96-5092-4157-a782-d11b81313966-kube-api-access-ndqzs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.475108 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13e399ce-00b2-45ea-980b-338dda00c87d" path="/var/lib/kubelet/pods/13e399ce-00b2-45ea-980b-338dda00c87d/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.475505 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58cac96-5092-4157-a782-d11b81313966-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.475839 4799 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="1ac2fc5a-2192-497c-ad7f-76a3fef58da6" path="/var/lib/kubelet/pods/1ac2fc5a-2192-497c-ad7f-76a3fef58da6/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.476255 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa13966-417e-4920-8ecc-5afc73396410" path="/var/lib/kubelet/pods/2fa13966-417e-4920-8ecc-5afc73396410/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.477125 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c238fc-db9c-4928-95a5-ba3a81f716f8" path="/var/lib/kubelet/pods/49c238fc-db9c-4928-95a5-ba3a81f716f8/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.478358 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37" path="/var/lib/kubelet/pods/6f6f46c2-2ec6-4dd9-ac29-f9de5ddb2f37/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.480106 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1ca94d-0dc1-402e-87b0-e76fc390a9a4" path="/var/lib/kubelet/pods/8d1ca94d-0dc1-402e-87b0-e76fc390a9a4/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.481070 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cd25bd-8dfb-4557-a0c8-06b3ae779192" path="/var/lib/kubelet/pods/a6cd25bd-8dfb-4557-a0c8-06b3ae779192/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.481588 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade86985-ca70-4f21-ae7a-825353f912cb" path="/var/lib/kubelet/pods/ade86985-ca70-4f21-ae7a-825353f912cb/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.482536 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b04e9a37-9722-491b-ada1-992d747e5bed" path="/var/lib/kubelet/pods/b04e9a37-9722-491b-ada1-992d747e5bed/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.483193 4799 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="c28c66b4-aa13-41ed-8045-b6f131d48146" path="/var/lib/kubelet/pods/c28c66b4-aa13-41ed-8045-b6f131d48146/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.483965 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c92846fc-e305-4af9-816a-4067b79d2403" path="/var/lib/kubelet/pods/c92846fc-e305-4af9-816a-4067b79d2403/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.485180 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa95ce2-79d9-4314-af1c-6d3b93667cb5" path="/var/lib/kubelet/pods/caa95ce2-79d9-4314-af1c-6d3b93667cb5/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.485576 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" path="/var/lib/kubelet/pods/cdad1fc3-eebb-4dcb-b69a-076d1dc63a89/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.491243 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58cac96-5092-4157-a782-d11b81313966" path="/var/lib/kubelet/pods/f58cac96-5092-4157-a782-d11b81313966/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.491983 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb51a95-a5db-4e7c-8cca-a59d07200ad5" path="/var/lib/kubelet/pods/fbb51a95-a5db-4e7c-8cca-a59d07200ad5/volumes" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.493017 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9vvt6"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.493047 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9vvt6"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.493062 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85c6c54fbb-zhvhw"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.493073 4799 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/barbican-api-85c6c54fbb-zhvhw"] Jan 27 08:12:00 crc kubenswrapper[4799]: E0127 08:12:00.578511 4799 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 08:12:00 crc kubenswrapper[4799]: E0127 08:12:00.578874 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data podName:8d822fe6-f547-4b8f-a6e4-c7256e1b2ace nodeName:}" failed. No retries permitted until 2026-01-27 08:12:08.578857132 +0000 UTC m=+1594.889961197 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data") pod "rabbitmq-server-0" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace") : configmap "rabbitmq-config-data" not found Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.592210 4799 scope.go:117] "RemoveContainer" containerID="f6c0c751dfd74d698477e4e018861e43ef7141cef238f287a434550b2a21af4b" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.667368 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.676856 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.679100 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg4fc\" (UniqueName: \"kubernetes.io/projected/963110c4-038a-4208-b712-f66e885aff69-kube-api-access-vg4fc\") pod \"963110c4-038a-4208-b712-f66e885aff69\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.679209 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-config-data\") pod \"963110c4-038a-4208-b712-f66e885aff69\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.679253 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-memcached-tls-certs\") pod \"963110c4-038a-4208-b712-f66e885aff69\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.679346 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-kolla-config\") pod \"963110c4-038a-4208-b712-f66e885aff69\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.679420 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-combined-ca-bundle\") pod \"963110c4-038a-4208-b712-f66e885aff69\" (UID: \"963110c4-038a-4208-b712-f66e885aff69\") " Jan 27 
08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.684076 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.685324 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-config-data" (OuterVolumeSpecName: "config-data") pod "963110c4-038a-4208-b712-f66e885aff69" (UID: "963110c4-038a-4208-b712-f66e885aff69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.685415 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "963110c4-038a-4208-b712-f66e885aff69" (UID: "963110c4-038a-4208-b712-f66e885aff69"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.688120 4799 scope.go:117] "RemoveContainer" containerID="40d6b9faa74af8ff6a32d01f9fc3a6c0f6258a0b08ea53fa5774e5655a3aa97d" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.701483 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.705007 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963110c4-038a-4208-b712-f66e885aff69-kube-api-access-vg4fc" (OuterVolumeSpecName: "kube-api-access-vg4fc") pod "963110c4-038a-4208-b712-f66e885aff69" (UID: "963110c4-038a-4208-b712-f66e885aff69"). InnerVolumeSpecName "kube-api-access-vg4fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.709609 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "963110c4-038a-4208-b712-f66e885aff69" (UID: "963110c4-038a-4208-b712-f66e885aff69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.730462 4799 scope.go:117] "RemoveContainer" containerID="bc6c983e01ab338de442045f241c1648fa14c28bc2a221e488c7933c7f13fa66" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.742266 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "963110c4-038a-4208-b712-f66e885aff69" (UID: "963110c4-038a-4208-b712-f66e885aff69"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.742338 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.763924 4799 scope.go:117] "RemoveContainer" containerID="72014f16587b1075c455dccabf89b73f3156eb870ef19e90bd26b184dfc0c813" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.780646 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.780692 4799 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.780706 4799 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/963110c4-038a-4208-b712-f66e885aff69-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.780716 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963110c4-038a-4208-b712-f66e885aff69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.780728 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg4fc\" (UniqueName: \"kubernetes.io/projected/963110c4-038a-4208-b712-f66e885aff69-kube-api-access-vg4fc\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.862911 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.884808 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885197 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-plugins\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885248 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-plugins-conf\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885343 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-tls\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885374 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-pod-info\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885452 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-erlang-cookie\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885477 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-erlang-cookie-secret\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885510 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-confd\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885533 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885556 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wrsz\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-kube-api-access-5wrsz\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.885595 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-server-conf\") pod \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\" (UID: \"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0\") " Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 
08:12:00.888217 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.889329 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.889647 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.897437 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.897456 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.897450 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-pod-info" (OuterVolumeSpecName: "pod-info") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.898479 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.916395 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-kube-api-access-5wrsz" (OuterVolumeSpecName: "kube-api-access-5wrsz") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "kube-api-access-5wrsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.929907 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data" (OuterVolumeSpecName: "config-data") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.972164 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-server-conf" (OuterVolumeSpecName: "server-conf") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.999790 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.999822 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.999832 4799 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.999841 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 
crc kubenswrapper[4799]: I0127 08:12:00.999848 4799 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.999857 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:00 crc kubenswrapper[4799]: I0127 08:12:00.999865 4799 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:00.999890 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:00.999900 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wrsz\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-kube-api-access-5wrsz\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:00.999909 4799 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.039733 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" (UID: "0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.041224 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.100855 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.100890 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.178673 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_54237546-70b8-4475-bd97-53ea6047786b/ovn-northd/0.log" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.178747 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.201993 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-ovn-northd-tls-certs\") pod \"54237546-70b8-4475-bd97-53ea6047786b\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202051 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54237546-70b8-4475-bd97-53ea6047786b-ovn-rundir\") pod \"54237546-70b8-4475-bd97-53ea6047786b\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202075 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-config\") pod \"54237546-70b8-4475-bd97-53ea6047786b\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202093 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdx7\" (UniqueName: \"kubernetes.io/projected/54237546-70b8-4475-bd97-53ea6047786b-kube-api-access-jvdx7\") pod \"54237546-70b8-4475-bd97-53ea6047786b\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202112 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-scripts\") pod \"54237546-70b8-4475-bd97-53ea6047786b\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202160 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-metrics-certs-tls-certs\") pod \"54237546-70b8-4475-bd97-53ea6047786b\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202210 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-combined-ca-bundle\") pod \"54237546-70b8-4475-bd97-53ea6047786b\" (UID: \"54237546-70b8-4475-bd97-53ea6047786b\") " Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.202504 4799 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: secret "cinder-scheduler-config-data" not found Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.202553 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:12:09.202540491 +0000 UTC m=+1595.513644556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scheduler-config-data" not found Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202666 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54237546-70b8-4475-bd97-53ea6047786b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "54237546-70b8-4475-bd97-53ea6047786b" (UID: "54237546-70b8-4475-bd97-53ea6047786b"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.202705 4799 secret.go:188] Couldn't get secret openstack/cinder-config-data: secret "cinder-config-data" not found Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.202747 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-config" (OuterVolumeSpecName: "config") pod "54237546-70b8-4475-bd97-53ea6047786b" (UID: "54237546-70b8-4475-bd97-53ea6047786b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.202771 4799 secret.go:188] Couldn't get secret openstack/cinder-scripts: secret "cinder-scripts" not found Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.202785 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:12:09.202765257 +0000 UTC m=+1595.513869322 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-config-data" not found Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.202875 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts podName:182368c8-7aeb-4cfe-8de7-60794b59792c nodeName:}" failed. No retries permitted until 2026-01-27 08:12:09.20285804 +0000 UTC m=+1595.513962105 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts") pod "cinder-scheduler-0" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c") : secret "cinder-scripts" not found Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.203060 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-scripts" (OuterVolumeSpecName: "scripts") pod "54237546-70b8-4475-bd97-53ea6047786b" (UID: "54237546-70b8-4475-bd97-53ea6047786b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.207203 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54237546-70b8-4475-bd97-53ea6047786b-kube-api-access-jvdx7" (OuterVolumeSpecName: "kube-api-access-jvdx7") pod "54237546-70b8-4475-bd97-53ea6047786b" (UID: "54237546-70b8-4475-bd97-53ea6047786b"). InnerVolumeSpecName "kube-api-access-jvdx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.232591 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54237546-70b8-4475-bd97-53ea6047786b" (UID: "54237546-70b8-4475-bd97-53ea6047786b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.233730 4799 generic.go:334] "Generic (PLEG): container finished" podID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerID="89a107c494f936fe6c451a1549012a9e938164def6d5383c5c024b222a155ca4" exitCode=0 Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.233761 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace","Type":"ContainerDied","Data":"89a107c494f936fe6c451a1549012a9e938164def6d5383c5c024b222a155ca4"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.249266 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6sqk" event={"ID":"3c0d170a-443e-438c-b4cd-0be234b7594c","Type":"ContainerStarted","Data":"3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.256536 4799 generic.go:334] "Generic (PLEG): container finished" podID="963110c4-038a-4208-b712-f66e885aff69" containerID="d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a" exitCode=0 Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.256601 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"963110c4-038a-4208-b712-f66e885aff69","Type":"ContainerDied","Data":"d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.256606 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.256626 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"963110c4-038a-4208-b712-f66e885aff69","Type":"ContainerDied","Data":"b53f152c6711522f8a575f1c61b868522bc55151f6f63b08e2cad87bbfc69bdb"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.256645 4799 scope.go:117] "RemoveContainer" containerID="d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.280810 4799 generic.go:334] "Generic (PLEG): container finished" podID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerID="966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713" exitCode=0 Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.280972 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0","Type":"ContainerDied","Data":"966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.280947 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.281028 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0","Type":"ContainerDied","Data":"98ee0c6e03cf0b63151da3373ee9831ee36f5330872278b158b020ad2f402eee"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.281997 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6sqk" podStartSLOduration=5.487369794 podStartE2EDuration="8.281974134s" podCreationTimestamp="2026-01-27 08:11:53 +0000 UTC" firstStartedPulling="2026-01-27 08:11:57.935819061 +0000 UTC m=+1584.246923126" lastFinishedPulling="2026-01-27 08:12:00.730423401 +0000 UTC m=+1587.041527466" observedRunningTime="2026-01-27 08:12:01.268824345 +0000 UTC m=+1587.579928420" watchObservedRunningTime="2026-01-27 08:12:01.281974134 +0000 UTC m=+1587.593078199" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.288739 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_54237546-70b8-4475-bd97-53ea6047786b/ovn-northd/0.log" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.288814 4799 generic.go:334] "Generic (PLEG): container finished" podID="54237546-70b8-4475-bd97-53ea6047786b" containerID="fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" exitCode=139 Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.288861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"54237546-70b8-4475-bd97-53ea6047786b","Type":"ContainerDied","Data":"fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.288892 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"54237546-70b8-4475-bd97-53ea6047786b","Type":"ContainerDied","Data":"8bc1e658ce322d9a110b37012f6d585bc2b4c99706cdbf248e49fd1af4932bcf"} Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.288952 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.303772 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.303806 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54237546-70b8-4475-bd97-53ea6047786b-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.303819 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.303830 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdx7\" (UniqueName: \"kubernetes.io/projected/54237546-70b8-4475-bd97-53ea6047786b-kube-api-access-jvdx7\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.303842 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54237546-70b8-4475-bd97-53ea6047786b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.314585 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "54237546-70b8-4475-bd97-53ea6047786b" (UID: 
"54237546-70b8-4475-bd97-53ea6047786b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.321074 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.327222 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.332028 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.335849 4799 scope.go:117] "RemoveContainer" containerID="d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.336220 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.336251 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a\": container with ID starting with d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a not found: ID does not exist" containerID="d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.336277 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a"} err="failed to get container status \"d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a\": rpc error: code = NotFound desc = could not find container \"d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a\": container with ID starting with d98d0c79854cb8f58e852a15bda25d609c871904704067e2c7d590e5fdacb53a not found: ID does not exist" Jan 27 
08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.336357 4799 scope.go:117] "RemoveContainer" containerID="966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.360837 4799 scope.go:117] "RemoveContainer" containerID="f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.368974 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "54237546-70b8-4475-bd97-53ea6047786b" (UID: "54237546-70b8-4475-bd97-53ea6047786b"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.406889 4799 scope.go:117] "RemoveContainer" containerID="966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.408234 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.408274 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/54237546-70b8-4475-bd97-53ea6047786b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.409656 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713\": container with ID starting with 966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713 not found: ID does not exist" 
containerID="966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.409697 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713"} err="failed to get container status \"966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713\": rpc error: code = NotFound desc = could not find container \"966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713\": container with ID starting with 966f0455778a97fcdcd44dc78bdffda4cc3f28c1ba75e48f0a8013fb5a8ec713 not found: ID does not exist" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.409734 4799 scope.go:117] "RemoveContainer" containerID="f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8" Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.412739 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8\": container with ID starting with f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8 not found: ID does not exist" containerID="f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.412783 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8"} err="failed to get container status \"f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8\": rpc error: code = NotFound desc = could not find container \"f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8\": container with ID starting with f22f93b6f1734abe8a314a5e928f427e1c1c4b7777e7932be85882de326727e8 not found: ID does not exist" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.412823 4799 scope.go:117] 
"RemoveContainer" containerID="960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.443918 4799 scope.go:117] "RemoveContainer" containerID="fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.470354 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.483928 4799 scope.go:117] "RemoveContainer" containerID="960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d" Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.484364 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d\": container with ID starting with 960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d not found: ID does not exist" containerID="960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.484400 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d"} err="failed to get container status \"960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d\": rpc error: code = NotFound desc = could not find container \"960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d\": container with ID starting with 960c6e3a2d0404b26224ebbe0c842e8ab32ce14fc65333840ba8d2163a57fc6d not found: ID does not exist" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.484429 4799 scope.go:117] "RemoveContainer" containerID="fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" Jan 27 08:12:01 crc kubenswrapper[4799]: E0127 08:12:01.484884 4799 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540\": container with ID starting with fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540 not found: ID does not exist" containerID="fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.484924 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540"} err="failed to get container status \"fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540\": rpc error: code = NotFound desc = could not find container \"fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540\": container with ID starting with fbcff016e9760704203725b31f6b8f4186145b4f641856d6714921eacca80540 not found: ID does not exist" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.614293 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-erlang-cookie-secret\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.614575 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-plugins-conf\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.614668 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-pod-info\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 
08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.614807 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.614936 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4c2h\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-kube-api-access-z4c2h\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.615033 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-server-conf\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.615129 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-plugins\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.615216 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-confd\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.615325 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-erlang-cookie\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.615412 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-tls\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.615505 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\" (UID: \"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace\") " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.618818 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.621965 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.622613 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.624323 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.626183 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-pod-info" (OuterVolumeSpecName: "pod-info") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.626439 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.626666 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.632546 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.634351 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.634486 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-kube-api-access-z4c2h" (OuterVolumeSpecName: "kube-api-access-z4c2h") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "kube-api-access-z4c2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.646358 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data" (OuterVolumeSpecName: "config-data") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.680884 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-server-conf" (OuterVolumeSpecName: "server-conf") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.715776 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" (UID: "8d822fe6-f547-4b8f-a6e4-c7256e1b2ace"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716261 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4c2h\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-kube-api-access-z4c2h\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716276 4799 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716284 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716292 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716320 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716331 4799 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716359 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716372 4799 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716385 4799 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716397 4799 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.716405 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.734693 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 27 08:12:01 crc kubenswrapper[4799]: I0127 08:12:01.817395 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.320799 4799 
generic.go:334] "Generic (PLEG): container finished" podID="8bca1b10-545f-4e35-a5af-e760d464d0ff" containerID="647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc" exitCode=0 Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.320933 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8bca1b10-545f-4e35-a5af-e760d464d0ff","Type":"ContainerDied","Data":"647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc"} Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.322011 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.326362 4799 generic.go:334] "Generic (PLEG): container finished" podID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerID="bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3" exitCode=0 Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.326407 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eff64e6c-4e67-435e-9f12-2d0e77530da3","Type":"ContainerDied","Data":"bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3"} Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.326425 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eff64e6c-4e67-435e-9f12-2d0e77530da3","Type":"ContainerDied","Data":"464d2ad70eb9a8e4e82c08e06b5e82f558f4c2eb2ee4f1ea7f5f6feee954dca9"} Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.326439 4799 scope.go:117] "RemoveContainer" containerID="bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.333116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"8d822fe6-f547-4b8f-a6e4-c7256e1b2ace","Type":"ContainerDied","Data":"c75dd90023ac23b7428b0c415868224ee6c738dcd4b11a7831e7889365354e98"} Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.333191 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.336888 4799 generic.go:334] "Generic (PLEG): container finished" podID="b32c7a11-1bfb-494f-a2d9-8800ba707e94" containerID="8ac05fa5a627833e782a394e656005413a8d6b8562b382febb6252fc92879e3a" exitCode=0 Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.336946 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d94bcc8dc-5hh96" event={"ID":"b32c7a11-1bfb-494f-a2d9-8800ba707e94","Type":"ContainerDied","Data":"8ac05fa5a627833e782a394e656005413a8d6b8562b382febb6252fc92879e3a"} Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.366726 4799 scope.go:117] "RemoveContainer" containerID="41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.409130 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.414154 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.429826 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-generated\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.429877 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqmkv\" (UniqueName: 
\"kubernetes.io/projected/eff64e6c-4e67-435e-9f12-2d0e77530da3-kube-api-access-cqmkv\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.429907 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-operator-scripts\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.430148 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-combined-ca-bundle\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.430618 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.431419 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.431937 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-galera-tls-certs\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.431997 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-kolla-config\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.432026 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.432076 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-default\") pod \"eff64e6c-4e67-435e-9f12-2d0e77530da3\" (UID: \"eff64e6c-4e67-435e-9f12-2d0e77530da3\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.432784 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.432824 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.434986 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.435445 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff64e6c-4e67-435e-9f12-2d0e77530da3-kube-api-access-cqmkv" (OuterVolumeSpecName: "kube-api-access-cqmkv") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "kube-api-access-cqmkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.436537 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.456631 4799 scope.go:117] "RemoveContainer" containerID="bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3" Jan 27 08:12:02 crc kubenswrapper[4799]: E0127 08:12:02.457391 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3\": container with ID starting with bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3 not found: ID does not exist" containerID="bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.457444 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3"} err="failed to get container status \"bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3\": rpc error: code = NotFound desc = could not find container \"bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3\": container with ID starting with bdf097745f232f49646a67ce09032547dfab7180e40c2a444e34628e220dced3 not found: ID does not exist" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.457473 4799 scope.go:117] "RemoveContainer" containerID="41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327" Jan 27 08:12:02 crc kubenswrapper[4799]: E0127 08:12:02.458088 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327\": container with ID starting with 41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327 not found: ID does not exist" containerID="41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.458114 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327"} err="failed to get container status \"41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327\": rpc error: code = NotFound desc = could not find container \"41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327\": container with ID starting with 41ceb431597d999dd9865e384f1d372534e35d3e6a1cead6b6867fb70170d327 not found: ID does not exist" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.458132 4799 scope.go:117] "RemoveContainer" containerID="89a107c494f936fe6c451a1549012a9e938164def6d5383c5c024b222a155ca4" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.471043 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.475390 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.483413 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "eff64e6c-4e67-435e-9f12-2d0e77530da3" (UID: "eff64e6c-4e67-435e-9f12-2d0e77530da3"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.507983 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0707039f-a588-4975-a71f-dfe2054ba4e6" path="/var/lib/kubelet/pods/0707039f-a588-4975-a71f-dfe2054ba4e6/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.508703 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" path="/var/lib/kubelet/pods/0dbfc3a0-883d-46a6-af9b-879efb42840e/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.509402 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" path="/var/lib/kubelet/pods/0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.510788 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54237546-70b8-4475-bd97-53ea6047786b" path="/var/lib/kubelet/pods/54237546-70b8-4475-bd97-53ea6047786b/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.511283 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69778bc9-c84e-42d0-9645-7fd3afa2ca28" path="/var/lib/kubelet/pods/69778bc9-c84e-42d0-9645-7fd3afa2ca28/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.511896 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" path="/var/lib/kubelet/pods/8d822fe6-f547-4b8f-a6e4-c7256e1b2ace/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.515799 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963110c4-038a-4208-b712-f66e885aff69" path="/var/lib/kubelet/pods/963110c4-038a-4208-b712-f66e885aff69/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.516379 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" 
path="/var/lib/kubelet/pods/f97b84a5-a34c-405f-8357-70cad8efedbc/volumes" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.534734 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqmkv\" (UniqueName: \"kubernetes.io/projected/eff64e6c-4e67-435e-9f12-2d0e77530da3-kube-api-access-cqmkv\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.534775 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.534788 4799 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff64e6c-4e67-435e-9f12-2d0e77530da3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.534798 4799 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.534819 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.534831 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eff64e6c-4e67-435e-9f12-2d0e77530da3-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.553779 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.566873 4799 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.645418 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-combined-ca-bundle\") pod \"8bca1b10-545f-4e35-a5af-e760d464d0ff\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.645571 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxb2x\" (UniqueName: \"kubernetes.io/projected/8bca1b10-545f-4e35-a5af-e760d464d0ff-kube-api-access-vxb2x\") pod \"8bca1b10-545f-4e35-a5af-e760d464d0ff\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.645688 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-config-data\") pod \"8bca1b10-545f-4e35-a5af-e760d464d0ff\" (UID: \"8bca1b10-545f-4e35-a5af-e760d464d0ff\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.646930 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.656449 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bca1b10-545f-4e35-a5af-e760d464d0ff-kube-api-access-vxb2x" (OuterVolumeSpecName: "kube-api-access-vxb2x") pod "8bca1b10-545f-4e35-a5af-e760d464d0ff" (UID: "8bca1b10-545f-4e35-a5af-e760d464d0ff"). InnerVolumeSpecName "kube-api-access-vxb2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.658609 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.662164 4799 scope.go:117] "RemoveContainer" containerID="eeb8b3edecbf9c4102ac408a97dae6338b573a4495066dcb7a4630df2561b314" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.699277 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bca1b10-545f-4e35-a5af-e760d464d0ff" (UID: "8bca1b10-545f-4e35-a5af-e760d464d0ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.703258 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-config-data" (OuterVolumeSpecName: "config-data") pod "8bca1b10-545f-4e35-a5af-e760d464d0ff" (UID: "8bca1b10-545f-4e35-a5af-e760d464d0ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.749690 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-credential-keys\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750034 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s6tb\" (UniqueName: \"kubernetes.io/projected/b32c7a11-1bfb-494f-a2d9-8800ba707e94-kube-api-access-7s6tb\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-combined-ca-bundle\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750105 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-config-data\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750128 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-internal-tls-certs\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750187 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-scripts\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750270 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-fernet-keys\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750320 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-public-tls-certs\") pod \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\" (UID: \"b32c7a11-1bfb-494f-a2d9-8800ba707e94\") " Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750612 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750624 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bca1b10-545f-4e35-a5af-e760d464d0ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.750634 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxb2x\" (UniqueName: \"kubernetes.io/projected/8bca1b10-545f-4e35-a5af-e760d464d0ff-kube-api-access-vxb2x\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.757177 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: 
"b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.757214 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: "b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.760247 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32c7a11-1bfb-494f-a2d9-8800ba707e94-kube-api-access-7s6tb" (OuterVolumeSpecName: "kube-api-access-7s6tb") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: "b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "kube-api-access-7s6tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.773495 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-scripts" (OuterVolumeSpecName: "scripts") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: "b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.793038 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: "b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.815542 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-config-data" (OuterVolumeSpecName: "config-data") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: "b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.833577 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: "b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.837178 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b32c7a11-1bfb-494f-a2d9-8800ba707e94" (UID: "b32c7a11-1bfb-494f-a2d9-8800ba707e94"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852205 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852238 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852249 4799 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852259 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s6tb\" (UniqueName: \"kubernetes.io/projected/b32c7a11-1bfb-494f-a2d9-8800ba707e94-kube-api-access-7s6tb\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852268 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852278 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852285 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:02 crc kubenswrapper[4799]: I0127 08:12:02.852307 
4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32c7a11-1bfb-494f-a2d9-8800ba707e94-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.117838 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.121769 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156293 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-sg-core-conf-yaml\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156351 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-log-httpd\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156382 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-combined-ca-bundle\") pod \"182368c8-7aeb-4cfe-8de7-60794b59792c\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156413 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data\") pod \"182368c8-7aeb-4cfe-8de7-60794b59792c\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 
08:12:03.156444 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom\") pod \"182368c8-7aeb-4cfe-8de7-60794b59792c\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156460 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-combined-ca-bundle\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156511 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/182368c8-7aeb-4cfe-8de7-60794b59792c-etc-machine-id\") pod \"182368c8-7aeb-4cfe-8de7-60794b59792c\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156530 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzw8x\" (UniqueName: \"kubernetes.io/projected/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-kube-api-access-xzw8x\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156562 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpbg7\" (UniqueName: \"kubernetes.io/projected/182368c8-7aeb-4cfe-8de7-60794b59792c-kube-api-access-dpbg7\") pod \"182368c8-7aeb-4cfe-8de7-60794b59792c\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156592 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-run-httpd\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156614 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-ceilometer-tls-certs\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156661 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts\") pod \"182368c8-7aeb-4cfe-8de7-60794b59792c\" (UID: \"182368c8-7aeb-4cfe-8de7-60794b59792c\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156711 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-config-data\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.156730 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-scripts\") pod \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\" (UID: \"e2bc07cc-2292-4fdf-9444-866ce10a6bf8\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.157048 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/182368c8-7aeb-4cfe-8de7-60794b59792c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "182368c8-7aeb-4cfe-8de7-60794b59792c" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.157545 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.162459 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.169498 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-scripts" (OuterVolumeSpecName: "scripts") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.172440 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts" (OuterVolumeSpecName: "scripts") pod "182368c8-7aeb-4cfe-8de7-60794b59792c" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.172485 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182368c8-7aeb-4cfe-8de7-60794b59792c-kube-api-access-dpbg7" (OuterVolumeSpecName: "kube-api-access-dpbg7") pod "182368c8-7aeb-4cfe-8de7-60794b59792c" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c"). InnerVolumeSpecName "kube-api-access-dpbg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.172552 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "182368c8-7aeb-4cfe-8de7-60794b59792c" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.172702 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-kube-api-access-xzw8x" (OuterVolumeSpecName: "kube-api-access-xzw8x") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "kube-api-access-xzw8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.213672 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.233746 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "182368c8-7aeb-4cfe-8de7-60794b59792c" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.236696 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.237294 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.257910 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/182368c8-7aeb-4cfe-8de7-60794b59792c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.257940 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzw8x\" (UniqueName: \"kubernetes.io/projected/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-kube-api-access-xzw8x\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.257953 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpbg7\" (UniqueName: \"kubernetes.io/projected/182368c8-7aeb-4cfe-8de7-60794b59792c-kube-api-access-dpbg7\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.257963 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.257973 4799 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.257993 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.258006 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.258017 4799 
reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.258027 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.258040 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.258051 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.258059 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.259428 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.275338 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-config-data" (OuterVolumeSpecName: "config-data") pod "e2bc07cc-2292-4fdf-9444-866ce10a6bf8" (UID: "e2bc07cc-2292-4fdf-9444-866ce10a6bf8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.292481 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data" (OuterVolumeSpecName: "config-data") pod "182368c8-7aeb-4cfe-8de7-60794b59792c" (UID: "182368c8-7aeb-4cfe-8de7-60794b59792c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.359280 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-config-data\") pod \"3c53857a-2e9c-4057-9f69-3611704d36f5\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.359394 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2xtd\" (UniqueName: \"kubernetes.io/projected/3c53857a-2e9c-4057-9f69-3611704d36f5-kube-api-access-c2xtd\") pod \"3c53857a-2e9c-4057-9f69-3611704d36f5\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.359439 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-combined-ca-bundle\") pod \"3c53857a-2e9c-4057-9f69-3611704d36f5\" (UID: \"3c53857a-2e9c-4057-9f69-3611704d36f5\") " Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.359809 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc07cc-2292-4fdf-9444-866ce10a6bf8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.359832 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/182368c8-7aeb-4cfe-8de7-60794b59792c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.362682 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c53857a-2e9c-4057-9f69-3611704d36f5-kube-api-access-c2xtd" (OuterVolumeSpecName: "kube-api-access-c2xtd") pod "3c53857a-2e9c-4057-9f69-3611704d36f5" (UID: "3c53857a-2e9c-4057-9f69-3611704d36f5"). InnerVolumeSpecName "kube-api-access-c2xtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.380061 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-config-data" (OuterVolumeSpecName: "config-data") pod "3c53857a-2e9c-4057-9f69-3611704d36f5" (UID: "3c53857a-2e9c-4057-9f69-3611704d36f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.388344 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c53857a-2e9c-4057-9f69-3611704d36f5" (UID: "3c53857a-2e9c-4057-9f69-3611704d36f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.390044 4799 generic.go:334] "Generic (PLEG): container finished" podID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerID="087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774" exitCode=0 Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.390108 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.390100 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerDied","Data":"087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.390537 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2bc07cc-2292-4fdf-9444-866ce10a6bf8","Type":"ContainerDied","Data":"3aa9b4c90bf01c243d509a471b3aa7d04706b94259b27e0dc53f30e45dd3fa35"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.390591 4799 scope.go:117] "RemoveContainer" containerID="bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.393201 4799 generic.go:334] "Generic (PLEG): container finished" podID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerID="887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903" exitCode=0 Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.393285 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.393277 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"182368c8-7aeb-4cfe-8de7-60794b59792c","Type":"ContainerDied","Data":"887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.393357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"182368c8-7aeb-4cfe-8de7-60794b59792c","Type":"ContainerDied","Data":"2e947bd71a4d3d6420047f74fd8fbdd510df04573e79e63ef63546e4851cfba1"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.395398 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8bca1b10-545f-4e35-a5af-e760d464d0ff","Type":"ContainerDied","Data":"a4775c61029a392a18b16843bdfea2a3a01d17c8fb63acb95483fef0852b91c8"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.395461 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.398039 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.420217 4799 scope.go:117] "RemoveContainer" containerID="f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.425521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d94bcc8dc-5hh96" event={"ID":"b32c7a11-1bfb-494f-a2d9-8800ba707e94","Type":"ContainerDied","Data":"49ea0e43eac8e94ff1f377c43617cfa0891f723aa81cd12f6093408c92e1c4a6"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.425846 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7d94bcc8dc-5hh96" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.429033 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c53857a-2e9c-4057-9f69-3611704d36f5" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0" exitCode=0 Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.429046 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.429090 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c53857a-2e9c-4057-9f69-3611704d36f5","Type":"ContainerDied","Data":"202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.429109 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c53857a-2e9c-4057-9f69-3611704d36f5","Type":"ContainerDied","Data":"e77182b95cdea62c62e3a2fbee3a9a5b43fd764a7c99a5aafbbc88407ec768d1"} Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.429161 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.441339 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.449827 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.455807 4799 scope.go:117] "RemoveContainer" containerID="087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.457514 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.461569 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.461959 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2xtd\" (UniqueName: \"kubernetes.io/projected/3c53857a-2e9c-4057-9f69-3611704d36f5-kube-api-access-c2xtd\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.461971 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c53857a-2e9c-4057-9f69-3611704d36f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.466645 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.475949 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.483445 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:12:03 
crc kubenswrapper[4799]: I0127 08:12:03.492226 4799 scope.go:117] "RemoveContainer" containerID="4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.492966 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.497759 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7d94bcc8dc-5hh96"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.506060 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7d94bcc8dc-5hh96"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.519438 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.522563 4799 scope.go:117] "RemoveContainer" containerID="bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.522667 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 08:12:03 crc kubenswrapper[4799]: E0127 08:12:03.523025 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a\": container with ID starting with bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a not found: ID does not exist" containerID="bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a" Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.523142 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a"} err="failed to get container status \"bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a\": rpc error: code = NotFound desc = could not find container 
\"bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a\": container with ID starting with bb48f58362059b6ca1888fa50c758a2329cd6dbc499ff416935701be8bede32a not found: ID does not exist"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.523240 4799 scope.go:117] "RemoveContainer" containerID="f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32"
Jan 27 08:12:03 crc kubenswrapper[4799]: E0127 08:12:03.523863 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32\": container with ID starting with f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32 not found: ID does not exist" containerID="f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.523899 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32"} err="failed to get container status \"f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32\": rpc error: code = NotFound desc = could not find container \"f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32\": container with ID starting with f13f2f82669106dde4a669c0c36f489b18439b25cce12c8268b4c6dacd82fd32 not found: ID does not exist"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.523940 4799 scope.go:117] "RemoveContainer" containerID="087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774"
Jan 27 08:12:03 crc kubenswrapper[4799]: E0127 08:12:03.524162 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774\": container with ID starting with 087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774 not found: ID does not exist" containerID="087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.524240 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774"} err="failed to get container status \"087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774\": rpc error: code = NotFound desc = could not find container \"087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774\": container with ID starting with 087ca23e3553e56a77f6d4a218fb2efba0d2f0caa0a25536beeb55f006e98774 not found: ID does not exist"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.524326 4799 scope.go:117] "RemoveContainer" containerID="4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528"
Jan 27 08:12:03 crc kubenswrapper[4799]: E0127 08:12:03.524698 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528\": container with ID starting with 4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528 not found: ID does not exist" containerID="4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.524770 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528"} err="failed to get container status \"4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528\": rpc error: code = NotFound desc = could not find container \"4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528\": container with ID starting with 4056910ece0b74cfae9ac370024f462da85fe0f74588ea569acc4376d75af528 not found: ID does not exist"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.524830 4799 scope.go:117] "RemoveContainer" containerID="2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.554155 4799 scope.go:117] "RemoveContainer" containerID="887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.582959 4799 scope.go:117] "RemoveContainer" containerID="2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f"
Jan 27 08:12:03 crc kubenswrapper[4799]: E0127 08:12:03.583530 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f\": container with ID starting with 2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f not found: ID does not exist" containerID="2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.583566 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f"} err="failed to get container status \"2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f\": rpc error: code = NotFound desc = could not find container \"2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f\": container with ID starting with 2e0014740e85a33f412b5d3841b82af95fac4e3521ee8803886d47fa9713d82f not found: ID does not exist"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.583592 4799 scope.go:117] "RemoveContainer" containerID="887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903"
Jan 27 08:12:03 crc kubenswrapper[4799]: E0127 08:12:03.583975 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903\": container with ID starting with 887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903 not found: ID does not exist" containerID="887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.584007 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903"} err="failed to get container status \"887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903\": rpc error: code = NotFound desc = could not find container \"887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903\": container with ID starting with 887b7522c7a579e1357187dc19c73cd82c504e4628cbbadfecd2ef27a7755903 not found: ID does not exist"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.584027 4799 scope.go:117] "RemoveContainer" containerID="647797099f96b25df47d1cc66e23dbb35585ab19b6a105db8444e78a1585d8dc"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.604500 4799 scope.go:117] "RemoveContainer" containerID="8ac05fa5a627833e782a394e656005413a8d6b8562b382febb6252fc92879e3a"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.627175 4799 scope.go:117] "RemoveContainer" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.665848 4799 scope.go:117] "RemoveContainer" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0"
Jan 27 08:12:03 crc kubenswrapper[4799]: E0127 08:12:03.666257 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0\": container with ID starting with 202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0 not found: ID does not exist" containerID="202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0"
Jan 27 08:12:03 crc kubenswrapper[4799]: I0127 08:12:03.666289 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0"} err="failed to get container status \"202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0\": rpc error: code = NotFound desc = could not find container \"202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0\": container with ID starting with 202e2f036574e98bda00448180d3c7a6925f661345419ea82a8d5eedddba0db0 not found: ID does not exist"
Jan 27 08:12:04 crc kubenswrapper[4799]: I0127 08:12:04.463633 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" path="/var/lib/kubelet/pods/182368c8-7aeb-4cfe-8de7-60794b59792c/volumes"
Jan 27 08:12:04 crc kubenswrapper[4799]: I0127 08:12:04.464429 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c53857a-2e9c-4057-9f69-3611704d36f5" path="/var/lib/kubelet/pods/3c53857a-2e9c-4057-9f69-3611704d36f5/volumes"
Jan 27 08:12:04 crc kubenswrapper[4799]: I0127 08:12:04.465103 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bca1b10-545f-4e35-a5af-e760d464d0ff" path="/var/lib/kubelet/pods/8bca1b10-545f-4e35-a5af-e760d464d0ff/volumes"
Jan 27 08:12:04 crc kubenswrapper[4799]: I0127 08:12:04.466413 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b32c7a11-1bfb-494f-a2d9-8800ba707e94" path="/var/lib/kubelet/pods/b32c7a11-1bfb-494f-a2d9-8800ba707e94/volumes"
Jan 27 08:12:04 crc kubenswrapper[4799]: I0127 08:12:04.467109 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" path="/var/lib/kubelet/pods/e2bc07cc-2292-4fdf-9444-866ce10a6bf8/volumes"
Jan 27 08:12:04 crc kubenswrapper[4799]: I0127 08:12:04.468202 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" path="/var/lib/kubelet/pods/eff64e6c-4e67-435e-9f12-2d0e77530da3/volumes"
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.610017 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.610686 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.611106 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.611185 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server"
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.612488 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.614212 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.618059 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 08:12:04 crc kubenswrapper[4799]: E0127 08:12:04.618120 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd"
Jan 27 08:12:05 crc kubenswrapper[4799]: I0127 08:12:05.166056 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6sqk"
Jan 27 08:12:05 crc kubenswrapper[4799]: I0127 08:12:05.166335 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6sqk"
Jan 27 08:12:05 crc kubenswrapper[4799]: I0127 08:12:05.236170 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6sqk"
Jan 27 08:12:05 crc kubenswrapper[4799]: I0127 08:12:05.529131 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6sqk"
Jan 27 08:12:05 crc kubenswrapper[4799]: I0127 08:12:05.591334 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6sqk"]
Jan 27 08:12:07 crc kubenswrapper[4799]: I0127 08:12:07.488777 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6sqk" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerName="registry-server" containerID="cri-o://3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe" gracePeriod=2
Jan 27 08:12:07 crc kubenswrapper[4799]: I0127 08:12:07.963361 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6sqk"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.013018 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5ff7b8d449-xjt48" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9696/\": dial tcp 10.217.0.152:9696: connect: connection refused"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.131600 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv5l7\" (UniqueName: \"kubernetes.io/projected/3c0d170a-443e-438c-b4cd-0be234b7594c-kube-api-access-kv5l7\") pod \"3c0d170a-443e-438c-b4cd-0be234b7594c\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") "
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.131663 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-utilities\") pod \"3c0d170a-443e-438c-b4cd-0be234b7594c\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") "
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.131743 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-catalog-content\") pod \"3c0d170a-443e-438c-b4cd-0be234b7594c\" (UID: \"3c0d170a-443e-438c-b4cd-0be234b7594c\") "
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.132542 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-utilities" (OuterVolumeSpecName: "utilities") pod "3c0d170a-443e-438c-b4cd-0be234b7594c" (UID: "3c0d170a-443e-438c-b4cd-0be234b7594c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.138373 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0d170a-443e-438c-b4cd-0be234b7594c-kube-api-access-kv5l7" (OuterVolumeSpecName: "kube-api-access-kv5l7") pod "3c0d170a-443e-438c-b4cd-0be234b7594c" (UID: "3c0d170a-443e-438c-b4cd-0be234b7594c"). InnerVolumeSpecName "kube-api-access-kv5l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.208403 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c0d170a-443e-438c-b4cd-0be234b7594c" (UID: "3c0d170a-443e-438c-b4cd-0be234b7594c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.233153 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.233186 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c0d170a-443e-438c-b4cd-0be234b7594c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.233201 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv5l7\" (UniqueName: \"kubernetes.io/projected/3c0d170a-443e-438c-b4cd-0be234b7594c-kube-api-access-kv5l7\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.505781 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerID="3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe" exitCode=0
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.505828 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6sqk" event={"ID":"3c0d170a-443e-438c-b4cd-0be234b7594c","Type":"ContainerDied","Data":"3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe"}
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.505858 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6sqk" event={"ID":"3c0d170a-443e-438c-b4cd-0be234b7594c","Type":"ContainerDied","Data":"b0e5b9ce82e0d25ae73b26c4db605a4da0cb9c9d9a523f627cb61870f2c5183a"}
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.505876 4799 scope.go:117] "RemoveContainer" containerID="3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.506011 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6sqk"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.539448 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6sqk"]
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.547555 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6sqk"]
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.558316 4799 scope.go:117] "RemoveContainer" containerID="ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.612435 4799 scope.go:117] "RemoveContainer" containerID="54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.671961 4799 scope.go:117] "RemoveContainer" containerID="3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe"
Jan 27 08:12:08 crc kubenswrapper[4799]: E0127 08:12:08.675427 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe\": container with ID starting with 3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe not found: ID does not exist" containerID="3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.675478 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe"} err="failed to get container status \"3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe\": rpc error: code = NotFound desc = could not find container \"3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe\": container with ID starting with 3cbb887f41c911c48410d2eb4292fe436fc43579c7ef40fea63b0af2dc290afe not found: ID does not exist"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.675506 4799 scope.go:117] "RemoveContainer" containerID="ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3"
Jan 27 08:12:08 crc kubenswrapper[4799]: E0127 08:12:08.679439 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3\": container with ID starting with ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3 not found: ID does not exist" containerID="ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.679491 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3"} err="failed to get container status \"ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3\": rpc error: code = NotFound desc = could not find container \"ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3\": container with ID starting with ae42256700c9a07707e77718c75d52112ac5af840206cfbb130a919c8d63abf3 not found: ID does not exist"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.679520 4799 scope.go:117] "RemoveContainer" containerID="54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650"
Jan 27 08:12:08 crc kubenswrapper[4799]: E0127 08:12:08.687474 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650\": container with ID starting with 54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650 not found: ID does not exist" containerID="54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650"
Jan 27 08:12:08 crc kubenswrapper[4799]: I0127 08:12:08.687533 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650"} err="failed to get container status \"54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650\": rpc error: code = NotFound desc = could not find container \"54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650\": container with ID starting with 54aeb31cea07c6fa392c21c2a7e5a7136b4b7094b0d08b80dd03772a0adeb650 not found: ID does not exist"
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.519120 4799 generic.go:334] "Generic (PLEG): container finished" podID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerID="0dcbd436a5762bcc61d230602c621a9b9849e2da017a7bac5c585459bb6be746" exitCode=0
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.519420 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff7b8d449-xjt48" event={"ID":"2db9ba76-0532-4ed0-972e-fd5452048b97","Type":"ContainerDied","Data":"0dcbd436a5762bcc61d230602c621a9b9849e2da017a7bac5c585459bb6be746"}
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.610795 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.611362 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.611643 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.611690 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server"
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.612422 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.614478 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.616038 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 08:12:09 crc kubenswrapper[4799]: E0127 08:12:09.616078 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd"
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.644419 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ff7b8d449-xjt48"
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.753949 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-public-tls-certs\") pod \"2db9ba76-0532-4ed0-972e-fd5452048b97\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") "
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.754091 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttqs7\" (UniqueName: \"kubernetes.io/projected/2db9ba76-0532-4ed0-972e-fd5452048b97-kube-api-access-ttqs7\") pod \"2db9ba76-0532-4ed0-972e-fd5452048b97\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") "
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.754154 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-ovndb-tls-certs\") pod \"2db9ba76-0532-4ed0-972e-fd5452048b97\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") "
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.754182 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-httpd-config\") pod \"2db9ba76-0532-4ed0-972e-fd5452048b97\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") "
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.754241 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-combined-ca-bundle\") pod \"2db9ba76-0532-4ed0-972e-fd5452048b97\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") "
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.754274 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-config\") pod \"2db9ba76-0532-4ed0-972e-fd5452048b97\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") "
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.754345 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-internal-tls-certs\") pod \"2db9ba76-0532-4ed0-972e-fd5452048b97\" (UID: \"2db9ba76-0532-4ed0-972e-fd5452048b97\") "
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.772880 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2db9ba76-0532-4ed0-972e-fd5452048b97" (UID: "2db9ba76-0532-4ed0-972e-fd5452048b97"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.773153 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db9ba76-0532-4ed0-972e-fd5452048b97-kube-api-access-ttqs7" (OuterVolumeSpecName: "kube-api-access-ttqs7") pod "2db9ba76-0532-4ed0-972e-fd5452048b97" (UID: "2db9ba76-0532-4ed0-972e-fd5452048b97"). InnerVolumeSpecName "kube-api-access-ttqs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.804995 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2db9ba76-0532-4ed0-972e-fd5452048b97" (UID: "2db9ba76-0532-4ed0-972e-fd5452048b97"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.813620 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2db9ba76-0532-4ed0-972e-fd5452048b97" (UID: "2db9ba76-0532-4ed0-972e-fd5452048b97"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.814958 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-config" (OuterVolumeSpecName: "config") pod "2db9ba76-0532-4ed0-972e-fd5452048b97" (UID: "2db9ba76-0532-4ed0-972e-fd5452048b97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.851346 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2db9ba76-0532-4ed0-972e-fd5452048b97" (UID: "2db9ba76-0532-4ed0-972e-fd5452048b97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.853473 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2db9ba76-0532-4ed0-972e-fd5452048b97" (UID: "2db9ba76-0532-4ed0-972e-fd5452048b97"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.860555 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.860645 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-config\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.860689 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.860704 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.860718 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttqs7\" (UniqueName: \"kubernetes.io/projected/2db9ba76-0532-4ed0-972e-fd5452048b97-kube-api-access-ttqs7\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.860736 4799 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:09 crc kubenswrapper[4799]: I0127 08:12:09.860775 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2db9ba76-0532-4ed0-972e-fd5452048b97-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 27 08:12:10 crc kubenswrapper[4799]: I0127 08:12:10.461385 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" path="/var/lib/kubelet/pods/3c0d170a-443e-438c-b4cd-0be234b7594c/volumes"
Jan 27 08:12:10 crc kubenswrapper[4799]: I0127 08:12:10.533119 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff7b8d449-xjt48" event={"ID":"2db9ba76-0532-4ed0-972e-fd5452048b97","Type":"ContainerDied","Data":"47094d7eb8b960ef039bf2ba003d57c2df443425fa0077685754b2e42227a3fa"}
Jan 27 08:12:10 crc kubenswrapper[4799]: I0127 08:12:10.533184 4799 scope.go:117] "RemoveContainer" containerID="7a13a4dba57a64680601c65f46bc4e1fd1ddd9881983073fa8db00588d91d96c"
Jan 27 08:12:10 crc kubenswrapper[4799]: I0127 08:12:10.533405 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ff7b8d449-xjt48"
Jan 27 08:12:10 crc kubenswrapper[4799]: I0127 08:12:10.566728 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5ff7b8d449-xjt48"]
Jan 27 08:12:10 crc kubenswrapper[4799]: I0127 08:12:10.573608 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5ff7b8d449-xjt48"]
Jan 27 08:12:10 crc kubenswrapper[4799]: I0127 08:12:10.574257 4799 scope.go:117] "RemoveContainer" containerID="0dcbd436a5762bcc61d230602c621a9b9849e2da017a7bac5c585459bb6be746"
Jan 27 08:12:12 crc kubenswrapper[4799]: I0127 08:12:12.460906 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" path="/var/lib/kubelet/pods/2db9ba76-0532-4ed0-972e-fd5452048b97/volumes"
Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.609528 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.610367 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.610575 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping,
stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.610839 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.610896 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.612186 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.615649 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:12:14 crc kubenswrapper[4799]: E0127 08:12:14.615746 4799 prober.go:104] "Probe 
errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.612134 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.612114 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.613687 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.614050 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" 
containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.614167 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.614517 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.615842 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 08:12:19 crc kubenswrapper[4799]: E0127 08:12:19.615937 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zct2j" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.700189 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zct2j_82b996cd-10af-493c-9972-bb6d9bedc711/ovs-vswitchd/0.log" Jan 27 
08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.701763 4799 generic.go:334] "Generic (PLEG): container finished" podID="82b996cd-10af-493c-9972-bb6d9bedc711" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" exitCode=137 Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.701798 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerDied","Data":"21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f"} Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.820179 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zct2j_82b996cd-10af-493c-9972-bb6d9bedc711/ovs-vswitchd/0.log" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.821188 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980123 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-lib\") pod \"82b996cd-10af-493c-9972-bb6d9bedc711\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980280 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-etc-ovs\") pod \"82b996cd-10af-493c-9972-bb6d9bedc711\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980216 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-lib" (OuterVolumeSpecName: "var-lib") pod "82b996cd-10af-493c-9972-bb6d9bedc711" (UID: "82b996cd-10af-493c-9972-bb6d9bedc711"). 
InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980333 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "82b996cd-10af-493c-9972-bb6d9bedc711" (UID: "82b996cd-10af-493c-9972-bb6d9bedc711"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980399 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4f2\" (UniqueName: \"kubernetes.io/projected/82b996cd-10af-493c-9972-bb6d9bedc711-kube-api-access-ct4f2\") pod \"82b996cd-10af-493c-9972-bb6d9bedc711\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980515 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-run\") pod \"82b996cd-10af-493c-9972-bb6d9bedc711\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980606 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b996cd-10af-493c-9972-bb6d9bedc711-scripts\") pod \"82b996cd-10af-493c-9972-bb6d9bedc711\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.980717 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-run" (OuterVolumeSpecName: "var-run") pod "82b996cd-10af-493c-9972-bb6d9bedc711" (UID: "82b996cd-10af-493c-9972-bb6d9bedc711"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.981530 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-log" (OuterVolumeSpecName: "var-log") pod "82b996cd-10af-493c-9972-bb6d9bedc711" (UID: "82b996cd-10af-493c-9972-bb6d9bedc711"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.981771 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b996cd-10af-493c-9972-bb6d9bedc711-scripts" (OuterVolumeSpecName: "scripts") pod "82b996cd-10af-493c-9972-bb6d9bedc711" (UID: "82b996cd-10af-493c-9972-bb6d9bedc711"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.981812 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-log\") pod \"82b996cd-10af-493c-9972-bb6d9bedc711\" (UID: \"82b996cd-10af-493c-9972-bb6d9bedc711\") " Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.982183 4799 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-lib\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.982205 4799 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.982217 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-run\") on node \"crc\" DevicePath \"\"" 
Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.982228 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82b996cd-10af-493c-9972-bb6d9bedc711-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.982238 4799 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/82b996cd-10af-493c-9972-bb6d9bedc711-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:23 crc kubenswrapper[4799]: I0127 08:12:23.991979 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b996cd-10af-493c-9972-bb6d9bedc711-kube-api-access-ct4f2" (OuterVolumeSpecName: "kube-api-access-ct4f2") pod "82b996cd-10af-493c-9972-bb6d9bedc711" (UID: "82b996cd-10af-493c-9972-bb6d9bedc711"). InnerVolumeSpecName "kube-api-access-ct4f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.082933 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4f2\" (UniqueName: \"kubernetes.io/projected/82b996cd-10af-493c-9972-bb6d9bedc711-kube-api-access-ct4f2\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.225839 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.385398 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-cache\") pod \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.385507 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.385530 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nchm6\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-kube-api-access-nchm6\") pod \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.385566 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") pod \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.385594 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-lock\") pod \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.386171 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-cache" (OuterVolumeSpecName: "cache") pod 
"f707c5d5-a9c3-4fdb-8361-9604b6b70153" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.386213 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f707c5d5-a9c3-4fdb-8361-9604b6b70153-combined-ca-bundle\") pod \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\" (UID: \"f707c5d5-a9c3-4fdb-8361-9604b6b70153\") " Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.386400 4799 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-cache\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.386879 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-lock" (OuterVolumeSpecName: "lock") pod "f707c5d5-a9c3-4fdb-8361-9604b6b70153" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.389444 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "swift") pod "f707c5d5-a9c3-4fdb-8361-9604b6b70153" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.389862 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f707c5d5-a9c3-4fdb-8361-9604b6b70153" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.390386 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-kube-api-access-nchm6" (OuterVolumeSpecName: "kube-api-access-nchm6") pod "f707c5d5-a9c3-4fdb-8361-9604b6b70153" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153"). InnerVolumeSpecName "kube-api-access-nchm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.488160 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.488211 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nchm6\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-kube-api-access-nchm6\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.488233 4799 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f707c5d5-a9c3-4fdb-8361-9604b6b70153-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.488251 4799 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f707c5d5-a9c3-4fdb-8361-9604b6b70153-lock\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.508704 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.590340 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.684458 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f707c5d5-a9c3-4fdb-8361-9604b6b70153-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f707c5d5-a9c3-4fdb-8361-9604b6b70153" (UID: "f707c5d5-a9c3-4fdb-8361-9604b6b70153"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.695110 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f707c5d5-a9c3-4fdb-8361-9604b6b70153-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.715344 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zct2j_82b996cd-10af-493c-9972-bb6d9bedc711/ovs-vswitchd/0.log" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.716026 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zct2j" event={"ID":"82b996cd-10af-493c-9972-bb6d9bedc711","Type":"ContainerDied","Data":"fdec8e1c21f0e8cb550396747c7e6f7c5caf17702f25535c340d1c434fd49346"} Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.716073 4799 scope.go:117] "RemoveContainer" containerID="21b0095990753be4151849817ffcbd1e8fde57a9c37bdf61cc72c3cb80a5906f" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.716234 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zct2j" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.724906 4799 generic.go:334] "Generic (PLEG): container finished" podID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerID="c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd" exitCode=137 Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.724983 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd"} Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.725018 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f707c5d5-a9c3-4fdb-8361-9604b6b70153","Type":"ContainerDied","Data":"93d96ff78264b52b471037a52000aa722f8578f5c5c48d5a662ada9c6c454fc5"} Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.725254 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.741373 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-zct2j"] Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.747842 4799 scope.go:117] "RemoveContainer" containerID="cd358c69143d0e85c1c5d15e974cfd7c67d8a00c6546c066867215994a4ca38e" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.748660 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-zct2j"] Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.770340 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.775762 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.786041 4799 scope.go:117] "RemoveContainer" containerID="9273d444c90e860b425f55a10394dc9bc4ec4a919c765d16a707028eb5a0d9d7" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.812874 4799 scope.go:117] "RemoveContainer" containerID="c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.829723 4799 scope.go:117] "RemoveContainer" containerID="45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.845101 4799 scope.go:117] "RemoveContainer" containerID="3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.861835 4799 scope.go:117] "RemoveContainer" containerID="9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.885597 4799 scope.go:117] "RemoveContainer" containerID="654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.911027 4799 
scope.go:117] "RemoveContainer" containerID="a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.927189 4799 scope.go:117] "RemoveContainer" containerID="ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.942871 4799 scope.go:117] "RemoveContainer" containerID="e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.958948 4799 scope.go:117] "RemoveContainer" containerID="899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3" Jan 27 08:12:24 crc kubenswrapper[4799]: I0127 08:12:24.979425 4799 scope.go:117] "RemoveContainer" containerID="929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.000526 4799 scope.go:117] "RemoveContainer" containerID="27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.018915 4799 scope.go:117] "RemoveContainer" containerID="5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.035067 4799 scope.go:117] "RemoveContainer" containerID="c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.051560 4799 scope.go:117] "RemoveContainer" containerID="f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.070130 4799 scope.go:117] "RemoveContainer" containerID="c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.086506 4799 scope.go:117] "RemoveContainer" containerID="c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.086832 4799 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd\": container with ID starting with c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd not found: ID does not exist" containerID="c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.086869 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd"} err="failed to get container status \"c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd\": rpc error: code = NotFound desc = could not find container \"c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd\": container with ID starting with c4d3c4b4e64dfbc1c45aeef0d3fa8039b0ef7f24f1db2ef2a53e7b81f2dbf7cd not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.086897 4799 scope.go:117] "RemoveContainer" containerID="45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.087235 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a\": container with ID starting with 45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a not found: ID does not exist" containerID="45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.087263 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a"} err="failed to get container status \"45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a\": rpc error: code = NotFound desc = could not find 
container \"45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a\": container with ID starting with 45d6619acf1257ed156eb62ccd78bce4b9de066ddff06f50c79b9cfa7413832a not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.087283 4799 scope.go:117] "RemoveContainer" containerID="3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.087556 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a\": container with ID starting with 3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a not found: ID does not exist" containerID="3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.087583 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a"} err="failed to get container status \"3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a\": rpc error: code = NotFound desc = could not find container \"3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a\": container with ID starting with 3922fa50fa34a49a4cae14b9fb8d549d80b13972fa6b8a9ebc9d6e8b35d4c31a not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.087601 4799 scope.go:117] "RemoveContainer" containerID="9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.087934 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f\": container with ID starting with 9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f not found: ID does 
not exist" containerID="9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.087959 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f"} err="failed to get container status \"9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f\": rpc error: code = NotFound desc = could not find container \"9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f\": container with ID starting with 9f59e75754ee9b9eac827452a8a976c731b40f46763088ad523dac5e470ed06f not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.087977 4799 scope.go:117] "RemoveContainer" containerID="654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.088231 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e\": container with ID starting with 654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e not found: ID does not exist" containerID="654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.088262 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e"} err="failed to get container status \"654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e\": rpc error: code = NotFound desc = could not find container \"654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e\": container with ID starting with 654d69afe42028cbde3190c61f3ec77cf53f47e3e019c731d93e9629e2ab6f7e not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.088280 4799 
scope.go:117] "RemoveContainer" containerID="a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.088735 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108\": container with ID starting with a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108 not found: ID does not exist" containerID="a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.088795 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108"} err="failed to get container status \"a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108\": rpc error: code = NotFound desc = could not find container \"a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108\": container with ID starting with a0f80023889ce615a3db222ff0674d625f01a9a123a68219af42b2380036e108 not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.088813 4799 scope.go:117] "RemoveContainer" containerID="ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.089175 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45\": container with ID starting with ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45 not found: ID does not exist" containerID="ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.089204 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45"} err="failed to get container status \"ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45\": rpc error: code = NotFound desc = could not find container \"ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45\": container with ID starting with ed9695191592e2cf7c9a81c2f7e406e573fe094f24810d13f52025b42fd14e45 not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.089248 4799 scope.go:117] "RemoveContainer" containerID="e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.089554 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751\": container with ID starting with e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751 not found: ID does not exist" containerID="e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.089633 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751"} err="failed to get container status \"e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751\": rpc error: code = NotFound desc = could not find container \"e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751\": container with ID starting with e030d8bc2db8dd9aa03cb62df712dbc9c8cb6607608f2f7f1450bf93e538b751 not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.089713 4799 scope.go:117] "RemoveContainer" containerID="899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.090224 4799 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3\": container with ID starting with 899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3 not found: ID does not exist" containerID="899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.090255 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3"} err="failed to get container status \"899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3\": rpc error: code = NotFound desc = could not find container \"899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3\": container with ID starting with 899ef5eb6ef3452c56a64f0c4e70618404205cce529f22053b03a285e9ee13a3 not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.090309 4799 scope.go:117] "RemoveContainer" containerID="929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.090669 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732\": container with ID starting with 929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732 not found: ID does not exist" containerID="929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.090735 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732"} err="failed to get container status \"929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732\": rpc error: code = NotFound desc = could not find container 
\"929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732\": container with ID starting with 929e394a1e6c0338b8779b3f2f7a5f4bcce35d3226afdb70bb609d003ca46732 not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.090760 4799 scope.go:117] "RemoveContainer" containerID="27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.091103 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd\": container with ID starting with 27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd not found: ID does not exist" containerID="27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.091150 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd"} err="failed to get container status \"27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd\": rpc error: code = NotFound desc = could not find container \"27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd\": container with ID starting with 27328900cd6228d146086ac95ae8b05b2862be13eb3c0f09db06830d1bca9dcd not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.091181 4799 scope.go:117] "RemoveContainer" containerID="5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.091504 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329\": container with ID starting with 5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329 not found: ID does not exist" 
containerID="5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.091539 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329"} err="failed to get container status \"5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329\": rpc error: code = NotFound desc = could not find container \"5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329\": container with ID starting with 5f6ad523ec32449ea83c02924beadf32d298bbe23dfa33c93e20912e9492a329 not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.091557 4799 scope.go:117] "RemoveContainer" containerID="c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.091827 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8\": container with ID starting with c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8 not found: ID does not exist" containerID="c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.091863 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8"} err="failed to get container status \"c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8\": rpc error: code = NotFound desc = could not find container \"c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8\": container with ID starting with c1c3036953768d8694461daaf820c6e0b6719d2fd7d5cf0b122afc241b86a7f8 not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.091883 4799 scope.go:117] 
"RemoveContainer" containerID="f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.092232 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca\": container with ID starting with f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca not found: ID does not exist" containerID="f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.092326 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca"} err="failed to get container status \"f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca\": rpc error: code = NotFound desc = could not find container \"f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca\": container with ID starting with f81a6afb6d0a44c9057b113f1164161ca416a68ae9a9c80e23e3c53a915439ca not found: ID does not exist" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.092376 4799 scope.go:117] "RemoveContainer" containerID="c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd" Jan 27 08:12:25 crc kubenswrapper[4799]: E0127 08:12:25.092785 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd\": container with ID starting with c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd not found: ID does not exist" containerID="c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd" Jan 27 08:12:25 crc kubenswrapper[4799]: I0127 08:12:25.092817 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd"} err="failed to get container status \"c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd\": rpc error: code = NotFound desc = could not find container \"c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd\": container with ID starting with c9d0893ee0366152b7225975257a1cb9bd87ad844aa46d49cb26f4f0a856f1bd not found: ID does not exist" Jan 27 08:12:26 crc kubenswrapper[4799]: I0127 08:12:26.467557 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" path="/var/lib/kubelet/pods/82b996cd-10af-493c-9972-bb6d9bedc711/volumes" Jan 27 08:12:26 crc kubenswrapper[4799]: I0127 08:12:26.469648 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" path="/var/lib/kubelet/pods/f707c5d5-a9c3-4fdb-8361-9604b6b70153/volumes" Jan 27 08:12:32 crc kubenswrapper[4799]: I0127 08:12:32.441296 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7d94bcc8dc-5hh96" podUID="b32c7a11-1bfb-494f-a2d9-8800ba707e94" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.149:5000/v3\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 08:12:53 crc kubenswrapper[4799]: I0127 08:12:53.731520 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:12:53 crc kubenswrapper[4799]: I0127 08:12:53.732149 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:13:23 crc kubenswrapper[4799]: I0127 08:13:23.731440 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:13:23 crc kubenswrapper[4799]: I0127 08:13:23.732007 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.598494 4799 scope.go:117] "RemoveContainer" containerID="5509cf2b97d8f78121cc9bf786809bf94e29e6e3d4c6777462e39be2a813f69f" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.627514 4799 scope.go:117] "RemoveContainer" containerID="875be4c83d9eb5ce22a15cfed93a30c532bc23ab0615fda18bc0dede97ba831d" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.668720 4799 scope.go:117] "RemoveContainer" containerID="8531b9fe91c1d7e57c8cf0e306faf43d14f749256aafc3cbbde126566f3c856a" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.689264 4799 scope.go:117] "RemoveContainer" containerID="fc2b3e78461f31ae8c0d985b3bfc7d70d2d784521f140b7a91d5d07cc8a1c1bb" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.714290 4799 scope.go:117] "RemoveContainer" containerID="c66b308f3c279630c385e185b8439caef62eae7e151ec37ec9ff3e1f97bbef5c" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.736142 4799 scope.go:117] "RemoveContainer" containerID="c5c30ee649059f2df57e1c622c248f05c9721e9c2209bd4f77ee9b887d4f7b83" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 
08:13:36.761425 4799 scope.go:117] "RemoveContainer" containerID="d4baa9a4ca248bb632c726ed19d0931976fe1a80ea8ddd800a2ef255bc28cc84" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.787489 4799 scope.go:117] "RemoveContainer" containerID="b97a28e671c633aa552074459bcd9ca6370d2cd4bbbec1651f30669159c9ecb0" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.811885 4799 scope.go:117] "RemoveContainer" containerID="16cbdbeec150e5934b19c040711658f9ffae0d22a481b7b98c67646ecc86a55d" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.842162 4799 scope.go:117] "RemoveContainer" containerID="42fb8ed3bcac153c476ac7c8729e2db6553912fbdf45263c9f43c02892a1d01d" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.860718 4799 scope.go:117] "RemoveContainer" containerID="7bfaef694e22e5a40ed6a51a010aed6fc114cc3d82e200c369c016b57c978d3f" Jan 27 08:13:36 crc kubenswrapper[4799]: I0127 08:13:36.880432 4799 scope.go:117] "RemoveContainer" containerID="8ea6ccce7f6c8746e9576d8da35e940f2fec2ce87b0123fcad72b2ce4a91d8ca" Jan 27 08:13:53 crc kubenswrapper[4799]: I0127 08:13:53.731167 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:13:53 crc kubenswrapper[4799]: I0127 08:13:53.731765 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:13:53 crc kubenswrapper[4799]: I0127 08:13:53.731811 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:13:53 
crc kubenswrapper[4799]: I0127 08:13:53.732517 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:13:53 crc kubenswrapper[4799]: I0127 08:13:53.732587 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" gracePeriod=600 Jan 27 08:13:53 crc kubenswrapper[4799]: E0127 08:13:53.856421 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:13:54 crc kubenswrapper[4799]: I0127 08:13:54.710585 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" exitCode=0 Jan 27 08:13:54 crc kubenswrapper[4799]: I0127 08:13:54.710650 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f"} Jan 27 08:13:54 crc kubenswrapper[4799]: I0127 08:13:54.710714 4799 scope.go:117] "RemoveContainer" 
containerID="45e8464efa823c0efb39a137ea02aa341a85fc57fd2ab60277b88ead10fb975d" Jan 27 08:13:54 crc kubenswrapper[4799]: I0127 08:13:54.711499 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:13:54 crc kubenswrapper[4799]: E0127 08:13:54.712049 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:14:05 crc kubenswrapper[4799]: I0127 08:14:05.452187 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:14:05 crc kubenswrapper[4799]: E0127 08:14:05.453099 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:14:16 crc kubenswrapper[4799]: I0127 08:14:16.451021 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:14:16 crc kubenswrapper[4799]: E0127 08:14:16.451717 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:14:31 crc kubenswrapper[4799]: I0127 08:14:31.452646 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:14:31 crc kubenswrapper[4799]: E0127 08:14:31.454253 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.049062 4799 scope.go:117] "RemoveContainer" containerID="f0f1a27d1c3775d4f9bb3826cb3def99570c42e6ed6f54bd1e8c144f71e8c3ee" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.079072 4799 scope.go:117] "RemoveContainer" containerID="eff3e29f80d9de6e897b75e0b02adcfe12220ce6d41edc70ddb3183ed98e9b7c" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.100004 4799 scope.go:117] "RemoveContainer" containerID="58103e21c893ba0c7f7e115f0cb776fe7e3182e09f7ad2ca9104804b9087f777" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.127112 4799 scope.go:117] "RemoveContainer" containerID="d3d0c0fe16de7311f2618c3baedd20dd747f2ebffc8915d9deba8c2da79a5917" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.175517 4799 scope.go:117] "RemoveContainer" containerID="4914f62a9fc682d3a594411b0b616df024ad8aaae1de80500b12fdffdccb724b" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.219156 4799 scope.go:117] "RemoveContainer" containerID="4d31ba741b0e686370ed07cbf292129da580b7f81e22219052a6901111ce0158" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.243868 4799 scope.go:117] "RemoveContainer" 
containerID="84b7b7f060f9d443695f5f2c676792466217b8527027454d39ebf62fc53abc75" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.262187 4799 scope.go:117] "RemoveContainer" containerID="96c4042bf5406878f8dd3772d3a3371136adb68791394d18af4037fa332114d5" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.282281 4799 scope.go:117] "RemoveContainer" containerID="81be9102825c99fb252a254b1bc712cdf4bdbbd0f63c3b67b69ea9652402fde0" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.303683 4799 scope.go:117] "RemoveContainer" containerID="86687dcaddc2d7937cd80f84b8ee9085606202b5a97d31a2d04ae3bf757d7599" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.318200 4799 scope.go:117] "RemoveContainer" containerID="ef38302687eb542879cd9b75c45fd05dd88772a509cb4ad39c25facc67c4fd68" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.334371 4799 scope.go:117] "RemoveContainer" containerID="6cc7749f94db9bd51e61ad464a45ac68c16adae2d011d3816b81dd71f9236ae2" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.364544 4799 scope.go:117] "RemoveContainer" containerID="7fff2a80febe3280b1d6e57b5c687153274f75957492becf2d7bfc6cffdb5f65" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.383855 4799 scope.go:117] "RemoveContainer" containerID="03bbab10f777d692f5a10d2c248adb7caebe96e8ec90d34037a6c351ed59f741" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.406627 4799 scope.go:117] "RemoveContainer" containerID="1436dd48ceffbab52561f3a9d13362c3d4099c3e9523d46fb3bd724938714bc6" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.424865 4799 scope.go:117] "RemoveContainer" containerID="a5ebfe33cee6f6d6b3a58ce99d9d667aadb7fd018e2aae4624e00476cff72f9b" Jan 27 08:14:37 crc kubenswrapper[4799]: I0127 08:14:37.441695 4799 scope.go:117] "RemoveContainer" containerID="fc49bc58c0d0f8ade28b95273e683d735d4bd2fd74ab1408e0ca7cd88feb6437" Jan 27 08:14:46 crc kubenswrapper[4799]: I0127 08:14:46.451573 4799 scope.go:117] "RemoveContainer" 
containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:14:46 crc kubenswrapper[4799]: E0127 08:14:46.452433 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:14:59 crc kubenswrapper[4799]: I0127 08:14:59.452597 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:14:59 crc kubenswrapper[4799]: E0127 08:14:59.453138 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165164 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t"] Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165569 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-log" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165591 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-log" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165603 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" 
containerName="rsync" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165611 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="rsync" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165628 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerName="probe" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165636 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerName="probe" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165644 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="swift-recon-cron" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165652 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="swift-recon-cron" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165660 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165670 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165684 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerName="extract-content" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165692 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerName="extract-content" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165710 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-auditor" Jan 27 08:15:00 crc 
kubenswrapper[4799]: I0127 08:15:00.165720 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165729 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165739 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-api" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165752 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-httpd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165759 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-httpd" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165770 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-notification-agent" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165779 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-notification-agent" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165788 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c53857a-2e9c-4057-9f69-3611704d36f5" containerName="nova-cell0-conductor-conductor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165796 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c53857a-2e9c-4057-9f69-3611704d36f5" containerName="nova-cell0-conductor-conductor" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165808 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" 
containerName="extract-utilities" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165815 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerName="extract-utilities" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165823 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api-log" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165831 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api-log" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165843 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165850 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165863 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165872 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-server" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165887 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-reaper" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165894 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-reaper" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165904 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" 
containerName="ovsdb-server-init" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165911 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server-init" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165924 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165931 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-server" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165943 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-metadata" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165951 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-metadata" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165965 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerName="setup-container" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165973 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerName="setup-container" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.165988 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.165996 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-api" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166005 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54237546-70b8-4475-bd97-53ea6047786b" 
containerName="openstack-network-exporter" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166013 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="openstack-network-exporter" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166026 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="proxy-httpd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166034 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="proxy-httpd" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166045 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerName="registry-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166053 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerName="registry-server" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166061 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166069 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166077 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-updater" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166085 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-updater" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166097 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" 
containerName="cinder-scheduler" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166105 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerName="cinder-scheduler" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166114 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="ovn-northd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166121 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="ovn-northd" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166134 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166143 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166155 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerName="mysql-bootstrap" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166162 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerName="mysql-bootstrap" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166176 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0707039f-a588-4975-a71f-dfe2054ba4e6" containerName="kube-state-metrics" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166183 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0707039f-a588-4975-a71f-dfe2054ba4e6" containerName="kube-state-metrics" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166196 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" 
containerName="rabbitmq" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166205 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerName="rabbitmq" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166219 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bca1b10-545f-4e35-a5af-e760d464d0ff" containerName="nova-cell1-conductor-conductor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166226 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bca1b10-545f-4e35-a5af-e760d464d0ff" containerName="nova-cell1-conductor-conductor" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166239 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerName="mariadb-account-create-update" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166246 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerName="mariadb-account-create-update" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166258 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166266 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166276 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="sg-core" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166284 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="sg-core" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166296 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="69778bc9-c84e-42d0-9645-7fd3afa2ca28" containerName="nova-scheduler-scheduler" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166308 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="69778bc9-c84e-42d0-9645-7fd3afa2ca28" containerName="nova-scheduler-scheduler" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166339 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerName="rabbitmq" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166347 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerName="rabbitmq" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166357 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166366 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166377 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-central-agent" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166384 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-central-agent" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166395 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-log" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166403 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-log" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166415 4799 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="963110c4-038a-4208-b712-f66e885aff69" containerName="memcached" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166423 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="963110c4-038a-4208-b712-f66e885aff69" containerName="memcached" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166433 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32c7a11-1bfb-494f-a2d9-8800ba707e94" containerName="keystone-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166441 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32c7a11-1bfb-494f-a2d9-8800ba707e94" containerName="keystone-api" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166452 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerName="galera" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166460 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerName="galera" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166475 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-expirer" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166484 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-expirer" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166496 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166505 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-server" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166516 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166524 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166539 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166546 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166556 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerName="mariadb-account-create-update" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166564 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerName="mariadb-account-create-update" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166580 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerName="setup-container" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166588 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerName="setup-container" Jan 27 08:15:00 crc kubenswrapper[4799]: E0127 08:15:00.166598 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-updater" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166606 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-updater" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166769 4799 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-log" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166782 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="ovn-northd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166796 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166804 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166817 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-reaper" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166829 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-log" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166841 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="69778bc9-c84e-42d0-9645-7fd3afa2ca28" containerName="nova-scheduler-scheduler" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166853 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dcc29e2-1fe4-416d-9d14-a90b4dbd27e0" containerName="rabbitmq" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166863 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bca1b10-545f-4e35-a5af-e760d464d0ff" containerName="nova-cell1-conductor-conductor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166874 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-httpd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 
08:15:00.166886 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="sg-core" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166895 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-updater" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166908 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdad1fc3-eebb-4dcb-b69a-076d1dc63a89" containerName="nova-metadata-metadata" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166916 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="proxy-httpd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166927 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-updater" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166940 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d822fe6-f547-4b8f-a6e4-c7256e1b2ace" containerName="rabbitmq" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166951 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166962 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="034b328a-c365-4b0a-8346-1cd571d65921" containerName="nova-api-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166973 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="container-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.166983 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-replicator" Jan 27 08:15:00 crc 
kubenswrapper[4799]: I0127 08:15:00.166992 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerName="mariadb-account-create-update" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167003 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0707039f-a588-4975-a71f-dfe2054ba4e6" containerName="kube-state-metrics" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167012 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="963110c4-038a-4208-b712-f66e885aff69" containerName="memcached" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167022 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-replicator" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167035 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32c7a11-1bfb-494f-a2d9-8800ba707e94" containerName="keystone-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167043 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovsdb-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167052 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbfc3a0-883d-46a6-af9b-879efb42840e" containerName="barbican-api-log" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167062 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-expirer" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167071 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167082 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" 
containerName="probe" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167090 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167102 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-central-agent" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167109 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="account-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167121 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="182368c8-7aeb-4cfe-8de7-60794b59792c" containerName="cinder-scheduler" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167135 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="rsync" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167142 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc07cc-2292-4fdf-9444-866ce10a6bf8" containerName="ceilometer-notification-agent" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167150 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="swift-recon-cron" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167159 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f707c5d5-a9c3-4fdb-8361-9604b6b70153" containerName="object-auditor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167167 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97b84a5-a34c-405f-8357-70cad8efedbc" containerName="mariadb-account-create-update" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167175 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3c0d170a-443e-438c-b4cd-0be234b7594c" containerName="registry-server" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167184 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff64e6c-4e67-435e-9f12-2d0e77530da3" containerName="galera" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167193 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="54237546-70b8-4475-bd97-53ea6047786b" containerName="openstack-network-exporter" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167202 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b996cd-10af-493c-9972-bb6d9bedc711" containerName="ovs-vswitchd" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167210 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c53857a-2e9c-4057-9f69-3611704d36f5" containerName="nova-cell0-conductor-conductor" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167220 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2db9ba76-0532-4ed0-972e-fd5452048b97" containerName="neutron-api" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.167795 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.170125 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.171489 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.174907 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t"] Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.230305 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-config-volume\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.230471 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-secret-volume\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.230575 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksvr\" (UniqueName: \"kubernetes.io/projected/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-kube-api-access-9ksvr\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.332447 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-config-volume\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.332563 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-secret-volume\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.332698 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ksvr\" (UniqueName: \"kubernetes.io/projected/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-kube-api-access-9ksvr\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.333909 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-config-volume\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.342838 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-secret-volume\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.354669 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ksvr\" (UniqueName: \"kubernetes.io/projected/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-kube-api-access-9ksvr\") pod \"collect-profiles-29491695-qzn4t\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.510504 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:00 crc kubenswrapper[4799]: I0127 08:15:00.729910 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t"] Jan 27 08:15:01 crc kubenswrapper[4799]: I0127 08:15:01.345817 4799 generic.go:334] "Generic (PLEG): container finished" podID="9afe2e1b-9426-4065-b6ee-1df70cdf7b25" containerID="dd1955955a5347e9719e29cb8ab95880d34683ade4e36041d12a64bfa03e6f71" exitCode=0 Jan 27 08:15:01 crc kubenswrapper[4799]: I0127 08:15:01.346072 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" event={"ID":"9afe2e1b-9426-4065-b6ee-1df70cdf7b25","Type":"ContainerDied","Data":"dd1955955a5347e9719e29cb8ab95880d34683ade4e36041d12a64bfa03e6f71"} Jan 27 08:15:01 crc kubenswrapper[4799]: I0127 08:15:01.346281 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" 
event={"ID":"9afe2e1b-9426-4065-b6ee-1df70cdf7b25","Type":"ContainerStarted","Data":"36f0ccb258ba93cb9987763daddda2cf6ac152607eda1063eafcaacc3f6dfa8b"} Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.660464 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.672675 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-secret-volume\") pod \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.672745 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ksvr\" (UniqueName: \"kubernetes.io/projected/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-kube-api-access-9ksvr\") pod \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.672782 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-config-volume\") pod \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\" (UID: \"9afe2e1b-9426-4065-b6ee-1df70cdf7b25\") " Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.673606 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-config-volume" (OuterVolumeSpecName: "config-volume") pod "9afe2e1b-9426-4065-b6ee-1df70cdf7b25" (UID: "9afe2e1b-9426-4065-b6ee-1df70cdf7b25"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.680756 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-kube-api-access-9ksvr" (OuterVolumeSpecName: "kube-api-access-9ksvr") pod "9afe2e1b-9426-4065-b6ee-1df70cdf7b25" (UID: "9afe2e1b-9426-4065-b6ee-1df70cdf7b25"). InnerVolumeSpecName "kube-api-access-9ksvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.683549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9afe2e1b-9426-4065-b6ee-1df70cdf7b25" (UID: "9afe2e1b-9426-4065-b6ee-1df70cdf7b25"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.773611 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.774118 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:02 crc kubenswrapper[4799]: I0127 08:15:02.774186 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ksvr\" (UniqueName: \"kubernetes.io/projected/9afe2e1b-9426-4065-b6ee-1df70cdf7b25-kube-api-access-9ksvr\") on node \"crc\" DevicePath \"\"" Jan 27 08:15:03 crc kubenswrapper[4799]: I0127 08:15:03.365232 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" 
event={"ID":"9afe2e1b-9426-4065-b6ee-1df70cdf7b25","Type":"ContainerDied","Data":"36f0ccb258ba93cb9987763daddda2cf6ac152607eda1063eafcaacc3f6dfa8b"} Jan 27 08:15:03 crc kubenswrapper[4799]: I0127 08:15:03.365280 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t" Jan 27 08:15:03 crc kubenswrapper[4799]: I0127 08:15:03.365283 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f0ccb258ba93cb9987763daddda2cf6ac152607eda1063eafcaacc3f6dfa8b" Jan 27 08:15:10 crc kubenswrapper[4799]: I0127 08:15:10.452250 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:15:10 crc kubenswrapper[4799]: E0127 08:15:10.453571 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:15:23 crc kubenswrapper[4799]: I0127 08:15:23.450889 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:15:23 crc kubenswrapper[4799]: E0127 08:15:23.451742 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:15:34 crc kubenswrapper[4799]: I0127 08:15:34.460361 4799 
scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:15:34 crc kubenswrapper[4799]: E0127 08:15:34.461270 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.624708 4799 scope.go:117] "RemoveContainer" containerID="164609eb4caa85166487145780db42d3fb0581e57c8a8c66eac723a4f5bc2cf7" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.650947 4799 scope.go:117] "RemoveContainer" containerID="1a41cdc3ee1368151b407e42de5af928b3e4b5851478b9ecb8d6a356357cac0e" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.677715 4799 scope.go:117] "RemoveContainer" containerID="d248ad2bdeac65e2733206852796a178a421bfc4311d4c8a1d5cac10230c0f50" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.703531 4799 scope.go:117] "RemoveContainer" containerID="db4e67252efaac0bd08269a4ff81fe59a762095f54b31c4946a50bbae415c5ba" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.727663 4799 scope.go:117] "RemoveContainer" containerID="6765331be43b26f21ba17b7ff2bdfd6c2d9d758d2339f7ca8171b493a229c5ea" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.767211 4799 scope.go:117] "RemoveContainer" containerID="fbf81c7d67c613cc7e18d02405e57bc12861568985ce1ef5ceba2ad9fbb16599" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.785673 4799 scope.go:117] "RemoveContainer" containerID="012c5492e476216d5b48eea413a8c072a2d360afd90cae7702e94faae4a0cecf" Jan 27 08:15:37 crc kubenswrapper[4799]: I0127 08:15:37.830610 4799 scope.go:117] "RemoveContainer" 
containerID="0ac7a7f07ad4bec91f6ec2aa1d2412ca93a2b5761f8883e09f189453418e7eb4" Jan 27 08:15:46 crc kubenswrapper[4799]: I0127 08:15:46.451243 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:15:46 crc kubenswrapper[4799]: E0127 08:15:46.454423 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.783981 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v5l6l"] Jan 27 08:15:56 crc kubenswrapper[4799]: E0127 08:15:56.784818 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afe2e1b-9426-4065-b6ee-1df70cdf7b25" containerName="collect-profiles" Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.784833 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afe2e1b-9426-4065-b6ee-1df70cdf7b25" containerName="collect-profiles" Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.785022 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afe2e1b-9426-4065-b6ee-1df70cdf7b25" containerName="collect-profiles" Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.786056 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.801602 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5l6l"] Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.977491 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmv6b\" (UniqueName: \"kubernetes.io/projected/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-kube-api-access-cmv6b\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.977650 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-catalog-content\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:56 crc kubenswrapper[4799]: I0127 08:15:56.977683 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-utilities\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.079374 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmv6b\" (UniqueName: \"kubernetes.io/projected/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-kube-api-access-cmv6b\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.079478 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-catalog-content\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.079515 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-utilities\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.080109 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-catalog-content\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.080200 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-utilities\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.100538 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmv6b\" (UniqueName: \"kubernetes.io/projected/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-kube-api-access-cmv6b\") pod \"redhat-marketplace-v5l6l\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.160368 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.380125 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rn5l5"] Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.381918 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.405982 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rn5l5"] Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.485104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-utilities\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.485170 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-catalog-content\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.485191 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpvn\" (UniqueName: \"kubernetes.io/projected/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-kube-api-access-fxpvn\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.586663 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-utilities\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.586746 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-catalog-content\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.586775 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxpvn\" (UniqueName: \"kubernetes.io/projected/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-kube-api-access-fxpvn\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.587886 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-catalog-content\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.588027 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-utilities\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.617913 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5l6l"] Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 
08:15:57.621068 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpvn\" (UniqueName: \"kubernetes.io/projected/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-kube-api-access-fxpvn\") pod \"redhat-operators-rn5l5\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.703745 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.882205 4799 generic.go:334] "Generic (PLEG): container finished" podID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerID="9b4a04befafc385acbd8bb1722bfa34b8226df8d613b5f8a17d35c88005b7211" exitCode=0 Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.882398 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5l6l" event={"ID":"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482","Type":"ContainerDied","Data":"9b4a04befafc385acbd8bb1722bfa34b8226df8d613b5f8a17d35c88005b7211"} Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.883755 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5l6l" event={"ID":"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482","Type":"ContainerStarted","Data":"61ac4d292eb7c12faf92ea1750f343d96ff729eb7aad28fde117613fcc2ec2a0"} Jan 27 08:15:57 crc kubenswrapper[4799]: I0127 08:15:57.892818 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:15:58 crc kubenswrapper[4799]: I0127 08:15:58.156173 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rn5l5"] Jan 27 08:15:58 crc kubenswrapper[4799]: W0127 08:15:58.160513 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c88fcc7_29ee_4aaa_aaf6_19c3a942d262.slice/crio-329e360a08dbcd9577e9de0278b5862d087e3139474cfead199f077acb7d2a07 WatchSource:0}: Error finding container 329e360a08dbcd9577e9de0278b5862d087e3139474cfead199f077acb7d2a07: Status 404 returned error can't find the container with id 329e360a08dbcd9577e9de0278b5862d087e3139474cfead199f077acb7d2a07 Jan 27 08:15:58 crc kubenswrapper[4799]: I0127 08:15:58.892504 4799 generic.go:334] "Generic (PLEG): container finished" podID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerID="ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5" exitCode=0 Jan 27 08:15:58 crc kubenswrapper[4799]: I0127 08:15:58.892573 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn5l5" event={"ID":"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262","Type":"ContainerDied","Data":"ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5"} Jan 27 08:15:58 crc kubenswrapper[4799]: I0127 08:15:58.892997 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn5l5" event={"ID":"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262","Type":"ContainerStarted","Data":"329e360a08dbcd9577e9de0278b5862d087e3139474cfead199f077acb7d2a07"} Jan 27 08:15:58 crc kubenswrapper[4799]: I0127 08:15:58.895662 4799 generic.go:334] "Generic (PLEG): container finished" podID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerID="b9847d2dd97ea186db2db313a39e8bb753b49af7c55f593c71bdddedcc292fa2" exitCode=0 Jan 27 08:15:58 crc kubenswrapper[4799]: I0127 08:15:58.895690 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5l6l" event={"ID":"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482","Type":"ContainerDied","Data":"b9847d2dd97ea186db2db313a39e8bb753b49af7c55f593c71bdddedcc292fa2"} Jan 27 08:15:59 crc kubenswrapper[4799]: I0127 08:15:59.904198 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-v5l6l" event={"ID":"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482","Type":"ContainerStarted","Data":"aef1724287dde5996d03cd9736df6e5ef62b236830ef1a8796b560f3ff125141"} Jan 27 08:15:59 crc kubenswrapper[4799]: I0127 08:15:59.906841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn5l5" event={"ID":"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262","Type":"ContainerStarted","Data":"d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef"} Jan 27 08:15:59 crc kubenswrapper[4799]: I0127 08:15:59.935647 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v5l6l" podStartSLOduration=2.520533887 podStartE2EDuration="3.935626518s" podCreationTimestamp="2026-01-27 08:15:56 +0000 UTC" firstStartedPulling="2026-01-27 08:15:57.890054121 +0000 UTC m=+1824.201158186" lastFinishedPulling="2026-01-27 08:15:59.305146722 +0000 UTC m=+1825.616250817" observedRunningTime="2026-01-27 08:15:59.932360178 +0000 UTC m=+1826.243464263" watchObservedRunningTime="2026-01-27 08:15:59.935626518 +0000 UTC m=+1826.246730593" Jan 27 08:16:00 crc kubenswrapper[4799]: I0127 08:16:00.915368 4799 generic.go:334] "Generic (PLEG): container finished" podID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerID="d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef" exitCode=0 Jan 27 08:16:00 crc kubenswrapper[4799]: I0127 08:16:00.915455 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn5l5" event={"ID":"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262","Type":"ContainerDied","Data":"d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef"} Jan 27 08:16:01 crc kubenswrapper[4799]: I0127 08:16:01.451345 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:16:01 crc kubenswrapper[4799]: E0127 08:16:01.451765 4799 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:16:01 crc kubenswrapper[4799]: I0127 08:16:01.926747 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn5l5" event={"ID":"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262","Type":"ContainerStarted","Data":"ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646"} Jan 27 08:16:01 crc kubenswrapper[4799]: I0127 08:16:01.952807 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rn5l5" podStartSLOduration=2.491298016 podStartE2EDuration="4.952787109s" podCreationTimestamp="2026-01-27 08:15:57 +0000 UTC" firstStartedPulling="2026-01-27 08:15:58.894388516 +0000 UTC m=+1825.205492581" lastFinishedPulling="2026-01-27 08:16:01.355877609 +0000 UTC m=+1827.666981674" observedRunningTime="2026-01-27 08:16:01.94584499 +0000 UTC m=+1828.256949055" watchObservedRunningTime="2026-01-27 08:16:01.952787109 +0000 UTC m=+1828.263891174" Jan 27 08:16:07 crc kubenswrapper[4799]: I0127 08:16:07.161543 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:16:07 crc kubenswrapper[4799]: I0127 08:16:07.164073 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:16:07 crc kubenswrapper[4799]: I0127 08:16:07.237527 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:16:07 crc kubenswrapper[4799]: I0127 
08:16:07.704760 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:16:07 crc kubenswrapper[4799]: I0127 08:16:07.705220 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:16:08 crc kubenswrapper[4799]: I0127 08:16:08.051672 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:16:08 crc kubenswrapper[4799]: I0127 08:16:08.098154 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5l6l"] Jan 27 08:16:08 crc kubenswrapper[4799]: I0127 08:16:08.767650 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rn5l5" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="registry-server" probeResult="failure" output=< Jan 27 08:16:08 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 08:16:08 crc kubenswrapper[4799]: > Jan 27 08:16:10 crc kubenswrapper[4799]: I0127 08:16:10.001555 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v5l6l" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="registry-server" containerID="cri-o://aef1724287dde5996d03cd9736df6e5ef62b236830ef1a8796b560f3ff125141" gracePeriod=2 Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.018852 4799 generic.go:334] "Generic (PLEG): container finished" podID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerID="aef1724287dde5996d03cd9736df6e5ef62b236830ef1a8796b560f3ff125141" exitCode=0 Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.018956 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5l6l" 
event={"ID":"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482","Type":"ContainerDied","Data":"aef1724287dde5996d03cd9736df6e5ef62b236830ef1a8796b560f3ff125141"} Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.312508 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.411159 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmv6b\" (UniqueName: \"kubernetes.io/projected/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-kube-api-access-cmv6b\") pod \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.411280 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-catalog-content\") pod \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.411362 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-utilities\") pod \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\" (UID: \"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482\") " Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.413251 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-utilities" (OuterVolumeSpecName: "utilities") pod "d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" (UID: "d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.420955 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-kube-api-access-cmv6b" (OuterVolumeSpecName: "kube-api-access-cmv6b") pod "d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" (UID: "d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482"). InnerVolumeSpecName "kube-api-access-cmv6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.445252 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" (UID: "d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.512816 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmv6b\" (UniqueName: \"kubernetes.io/projected/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-kube-api-access-cmv6b\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.513079 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:12 crc kubenswrapper[4799]: I0127 08:16:12.513088 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.030565 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5l6l" 
event={"ID":"d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482","Type":"ContainerDied","Data":"61ac4d292eb7c12faf92ea1750f343d96ff729eb7aad28fde117613fcc2ec2a0"} Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.030643 4799 scope.go:117] "RemoveContainer" containerID="aef1724287dde5996d03cd9736df6e5ef62b236830ef1a8796b560f3ff125141" Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.030714 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5l6l" Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.058063 4799 scope.go:117] "RemoveContainer" containerID="b9847d2dd97ea186db2db313a39e8bb753b49af7c55f593c71bdddedcc292fa2" Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.070749 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5l6l"] Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.083212 4799 scope.go:117] "RemoveContainer" containerID="9b4a04befafc385acbd8bb1722bfa34b8226df8d613b5f8a17d35c88005b7211" Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.085907 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5l6l"] Jan 27 08:16:13 crc kubenswrapper[4799]: I0127 08:16:13.452137 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:16:13 crc kubenswrapper[4799]: E0127 08:16:13.452514 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:16:14 crc kubenswrapper[4799]: I0127 08:16:14.465448 4799 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" path="/var/lib/kubelet/pods/d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482/volumes" Jan 27 08:16:17 crc kubenswrapper[4799]: I0127 08:16:17.752539 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:16:17 crc kubenswrapper[4799]: I0127 08:16:17.801678 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:16:17 crc kubenswrapper[4799]: I0127 08:16:17.987964 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rn5l5"] Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.075163 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rn5l5" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="registry-server" containerID="cri-o://ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646" gracePeriod=2 Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.482971 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.627050 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxpvn\" (UniqueName: \"kubernetes.io/projected/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-kube-api-access-fxpvn\") pod \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.627485 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-catalog-content\") pod \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.627539 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-utilities\") pod \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\" (UID: \"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262\") " Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.629096 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-utilities" (OuterVolumeSpecName: "utilities") pod "1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" (UID: "1c88fcc7-29ee-4aaa-aaf6-19c3a942d262"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.640499 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-kube-api-access-fxpvn" (OuterVolumeSpecName: "kube-api-access-fxpvn") pod "1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" (UID: "1c88fcc7-29ee-4aaa-aaf6-19c3a942d262"). InnerVolumeSpecName "kube-api-access-fxpvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.729338 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxpvn\" (UniqueName: \"kubernetes.io/projected/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-kube-api-access-fxpvn\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.729386 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.748568 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" (UID: "1c88fcc7-29ee-4aaa-aaf6-19c3a942d262"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:16:19 crc kubenswrapper[4799]: I0127 08:16:19.830895 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.086751 4799 generic.go:334] "Generic (PLEG): container finished" podID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerID="ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646" exitCode=0 Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.086804 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rn5l5" event={"ID":"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262","Type":"ContainerDied","Data":"ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646"} Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.086839 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rn5l5" event={"ID":"1c88fcc7-29ee-4aaa-aaf6-19c3a942d262","Type":"ContainerDied","Data":"329e360a08dbcd9577e9de0278b5862d087e3139474cfead199f077acb7d2a07"} Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.086859 4799 scope.go:117] "RemoveContainer" containerID="ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.086881 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rn5l5" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.108772 4799 scope.go:117] "RemoveContainer" containerID="d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.129465 4799 scope.go:117] "RemoveContainer" containerID="ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.138784 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rn5l5"] Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.146244 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rn5l5"] Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.158807 4799 scope.go:117] "RemoveContainer" containerID="ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646" Jan 27 08:16:20 crc kubenswrapper[4799]: E0127 08:16:20.162911 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646\": container with ID starting with ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646 not found: ID does not exist" containerID="ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.162966 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646"} err="failed to get container status \"ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646\": rpc error: code = NotFound desc = could not find container \"ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646\": container with ID starting with ac52454533c4deccb0788abe378996a213f663829476bae8a10e20ce0cec4646 not found: ID does not exist" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.163002 4799 scope.go:117] "RemoveContainer" containerID="d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef" Jan 27 08:16:20 crc kubenswrapper[4799]: E0127 08:16:20.163542 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef\": container with ID starting with d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef not found: ID does not exist" containerID="d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.163579 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef"} err="failed to get container status \"d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef\": rpc error: code = NotFound desc = could not find container \"d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef\": container with ID starting with d4d61aa4170f5fddd45cb162b13961a71bed58d8bec140589f1e248a9488b6ef not found: ID does not exist" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.163606 4799 scope.go:117] "RemoveContainer" containerID="ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5" Jan 27 08:16:20 crc kubenswrapper[4799]: E0127 
08:16:20.163944 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5\": container with ID starting with ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5 not found: ID does not exist" containerID="ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.163981 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5"} err="failed to get container status \"ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5\": rpc error: code = NotFound desc = could not find container \"ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5\": container with ID starting with ea19a01e7237aa6b4adc89ac3b1c517cce06841f968332e164aecc4ca61120d5 not found: ID does not exist" Jan 27 08:16:20 crc kubenswrapper[4799]: I0127 08:16:20.463487 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" path="/var/lib/kubelet/pods/1c88fcc7-29ee-4aaa-aaf6-19c3a942d262/volumes" Jan 27 08:16:24 crc kubenswrapper[4799]: I0127 08:16:24.457519 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:16:24 crc kubenswrapper[4799]: E0127 08:16:24.460048 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:16:37 crc kubenswrapper[4799]: I0127 08:16:37.451710 
4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:16:37 crc kubenswrapper[4799]: E0127 08:16:37.452802 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:16:38 crc kubenswrapper[4799]: I0127 08:16:38.034778 4799 scope.go:117] "RemoveContainer" containerID="17dfb08fe8178c1d306515db03bb46849ecda3ce4eaa4bed4488e6fe713665e9" Jan 27 08:16:38 crc kubenswrapper[4799]: I0127 08:16:38.062577 4799 scope.go:117] "RemoveContainer" containerID="e5651c5b7379a75592901438d55a99de8bbec7d0d80b2ae8262facd046d1c1d4" Jan 27 08:16:38 crc kubenswrapper[4799]: I0127 08:16:38.079740 4799 scope.go:117] "RemoveContainer" containerID="2e49a4fc214f12566b2516479f547812997d82785b8858995acbe7d34ebe9df8" Jan 27 08:16:38 crc kubenswrapper[4799]: I0127 08:16:38.129412 4799 scope.go:117] "RemoveContainer" containerID="44bc46982a56c1d5622c1a64e37403dc64800a0b2badf4a2f6a3a1d809304011" Jan 27 08:16:38 crc kubenswrapper[4799]: I0127 08:16:38.163143 4799 scope.go:117] "RemoveContainer" containerID="63680fd5fbc7fe6d82c8a34dc3d4dfb3482e02133edccfcbed602cebabc84481" Jan 27 08:16:38 crc kubenswrapper[4799]: I0127 08:16:38.193453 4799 scope.go:117] "RemoveContainer" containerID="1a3483f254a4ccddb358a85f5075a97a4b4d9b8f0c07206e170e1566a3b7db9a" Jan 27 08:16:38 crc kubenswrapper[4799]: I0127 08:16:38.213439 4799 scope.go:117] "RemoveContainer" containerID="0196696a99d83cff567b7ccdc4fa86e6f603ead4b71c1756125b08538beeae47" Jan 27 08:16:52 crc kubenswrapper[4799]: I0127 08:16:52.452806 4799 scope.go:117] "RemoveContainer" 
containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:16:52 crc kubenswrapper[4799]: E0127 08:16:52.454087 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:17:05 crc kubenswrapper[4799]: I0127 08:17:05.452514 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:17:05 crc kubenswrapper[4799]: E0127 08:17:05.453571 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:17:19 crc kubenswrapper[4799]: I0127 08:17:19.451984 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:17:19 crc kubenswrapper[4799]: E0127 08:17:19.452871 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:17:33 crc kubenswrapper[4799]: I0127 08:17:33.451987 4799 scope.go:117] 
"RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:17:33 crc kubenswrapper[4799]: E0127 08:17:33.453071 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:17:38 crc kubenswrapper[4799]: I0127 08:17:38.325196 4799 scope.go:117] "RemoveContainer" containerID="329ac23f45fba306609c0758cbb469b51cb52b2e49c3a319f13207b93bc317c3" Jan 27 08:17:48 crc kubenswrapper[4799]: I0127 08:17:48.452063 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:17:48 crc kubenswrapper[4799]: E0127 08:17:48.452785 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:18:03 crc kubenswrapper[4799]: I0127 08:18:03.451511 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:18:03 crc kubenswrapper[4799]: E0127 08:18:03.453717 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:18:14 crc kubenswrapper[4799]: I0127 08:18:14.459518 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:18:14 crc kubenswrapper[4799]: E0127 08:18:14.460837 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:18:27 crc kubenswrapper[4799]: I0127 08:18:27.452709 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:18:27 crc kubenswrapper[4799]: E0127 08:18:27.453523 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:18:40 crc kubenswrapper[4799]: I0127 08:18:40.451331 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:18:40 crc kubenswrapper[4799]: E0127 08:18:40.452274 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:18:51 crc kubenswrapper[4799]: I0127 08:18:51.451936 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:18:51 crc kubenswrapper[4799]: E0127 08:18:51.452801 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:19:03 crc kubenswrapper[4799]: I0127 08:19:03.451053 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:19:04 crc kubenswrapper[4799]: I0127 08:19:04.374574 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"092c1a731185a1e64f1e7f6d7b8f7a4e0260b552134cd69fabc237b68b3e120c"} Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.898037 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2j47v"] Jan 27 08:20:02 crc kubenswrapper[4799]: E0127 08:20:02.898980 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="extract-utilities" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.898995 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="extract-utilities" Jan 27 08:20:02 
crc kubenswrapper[4799]: E0127 08:20:02.899010 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="extract-content" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.899018 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="extract-content" Jan 27 08:20:02 crc kubenswrapper[4799]: E0127 08:20:02.899030 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="extract-content" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.899037 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="extract-content" Jan 27 08:20:02 crc kubenswrapper[4799]: E0127 08:20:02.899045 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="registry-server" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.899051 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="registry-server" Jan 27 08:20:02 crc kubenswrapper[4799]: E0127 08:20:02.899068 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="extract-utilities" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.899074 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="extract-utilities" Jan 27 08:20:02 crc kubenswrapper[4799]: E0127 08:20:02.899082 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="registry-server" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.899088 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="registry-server" Jan 27 08:20:02 crc 
kubenswrapper[4799]: I0127 08:20:02.899282 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c88fcc7-29ee-4aaa-aaf6-19c3a942d262" containerName="registry-server" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.899329 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d4f288-848a-4cd6-a5e6-d3e8ea7dd482" containerName="registry-server" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.900402 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.912011 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j47v"] Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.940450 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-utilities\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.940551 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8zk\" (UniqueName: \"kubernetes.io/projected/312a6b49-6e52-4e29-8cee-65d3eda91657-kube-api-access-9m8zk\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:02 crc kubenswrapper[4799]: I0127 08:20:02.940605 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-catalog-content\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" 
Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.041896 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-utilities\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.041989 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8zk\" (UniqueName: \"kubernetes.io/projected/312a6b49-6e52-4e29-8cee-65d3eda91657-kube-api-access-9m8zk\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.042036 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-catalog-content\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.042546 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-utilities\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.042594 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-catalog-content\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:03 crc kubenswrapper[4799]: 
I0127 08:20:03.062118 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8zk\" (UniqueName: \"kubernetes.io/projected/312a6b49-6e52-4e29-8cee-65d3eda91657-kube-api-access-9m8zk\") pod \"community-operators-2j47v\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.221051 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.580244 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2j47v"] Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.862731 4799 generic.go:334] "Generic (PLEG): container finished" podID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerID="bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7" exitCode=0 Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.862782 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j47v" event={"ID":"312a6b49-6e52-4e29-8cee-65d3eda91657","Type":"ContainerDied","Data":"bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7"} Jan 27 08:20:03 crc kubenswrapper[4799]: I0127 08:20:03.862850 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j47v" event={"ID":"312a6b49-6e52-4e29-8cee-65d3eda91657","Type":"ContainerStarted","Data":"d5e87468207123f2675069e30dd1d313e5c898c6a2fe7b626d9b2c23759a4718"} Jan 27 08:20:04 crc kubenswrapper[4799]: I0127 08:20:04.873040 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j47v" event={"ID":"312a6b49-6e52-4e29-8cee-65d3eda91657","Type":"ContainerStarted","Data":"829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9"} Jan 27 08:20:05 crc kubenswrapper[4799]: I0127 
08:20:05.884749 4799 generic.go:334] "Generic (PLEG): container finished" podID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerID="829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9" exitCode=0 Jan 27 08:20:05 crc kubenswrapper[4799]: I0127 08:20:05.884816 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j47v" event={"ID":"312a6b49-6e52-4e29-8cee-65d3eda91657","Type":"ContainerDied","Data":"829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9"} Jan 27 08:20:06 crc kubenswrapper[4799]: I0127 08:20:06.897564 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j47v" event={"ID":"312a6b49-6e52-4e29-8cee-65d3eda91657","Type":"ContainerStarted","Data":"0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10"} Jan 27 08:20:06 crc kubenswrapper[4799]: I0127 08:20:06.922982 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2j47v" podStartSLOduration=2.49808911 podStartE2EDuration="4.922963692s" podCreationTimestamp="2026-01-27 08:20:02 +0000 UTC" firstStartedPulling="2026-01-27 08:20:03.864318555 +0000 UTC m=+2070.175422620" lastFinishedPulling="2026-01-27 08:20:06.289193097 +0000 UTC m=+2072.600297202" observedRunningTime="2026-01-27 08:20:06.91771681 +0000 UTC m=+2073.228820885" watchObservedRunningTime="2026-01-27 08:20:06.922963692 +0000 UTC m=+2073.234067757" Jan 27 08:20:13 crc kubenswrapper[4799]: I0127 08:20:13.221879 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:13 crc kubenswrapper[4799]: I0127 08:20:13.222534 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:13 crc kubenswrapper[4799]: I0127 08:20:13.284576 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:13 crc kubenswrapper[4799]: I0127 08:20:13.987947 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:14 crc kubenswrapper[4799]: I0127 08:20:14.032197 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j47v"] Jan 27 08:20:15 crc kubenswrapper[4799]: I0127 08:20:15.963654 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2j47v" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="registry-server" containerID="cri-o://0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10" gracePeriod=2 Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.404041 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.543377 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-catalog-content\") pod \"312a6b49-6e52-4e29-8cee-65d3eda91657\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.543491 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-utilities\") pod \"312a6b49-6e52-4e29-8cee-65d3eda91657\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.543618 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m8zk\" (UniqueName: \"kubernetes.io/projected/312a6b49-6e52-4e29-8cee-65d3eda91657-kube-api-access-9m8zk\") pod 
\"312a6b49-6e52-4e29-8cee-65d3eda91657\" (UID: \"312a6b49-6e52-4e29-8cee-65d3eda91657\") " Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.546511 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-utilities" (OuterVolumeSpecName: "utilities") pod "312a6b49-6e52-4e29-8cee-65d3eda91657" (UID: "312a6b49-6e52-4e29-8cee-65d3eda91657"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.556543 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312a6b49-6e52-4e29-8cee-65d3eda91657-kube-api-access-9m8zk" (OuterVolumeSpecName: "kube-api-access-9m8zk") pod "312a6b49-6e52-4e29-8cee-65d3eda91657" (UID: "312a6b49-6e52-4e29-8cee-65d3eda91657"). InnerVolumeSpecName "kube-api-access-9m8zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.607621 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "312a6b49-6e52-4e29-8cee-65d3eda91657" (UID: "312a6b49-6e52-4e29-8cee-65d3eda91657"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.646139 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.646458 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/312a6b49-6e52-4e29-8cee-65d3eda91657-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.646542 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m8zk\" (UniqueName: \"kubernetes.io/projected/312a6b49-6e52-4e29-8cee-65d3eda91657-kube-api-access-9m8zk\") on node \"crc\" DevicePath \"\"" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.971014 4799 generic.go:334] "Generic (PLEG): container finished" podID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerID="0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10" exitCode=0 Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.971087 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2j47v" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.971105 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j47v" event={"ID":"312a6b49-6e52-4e29-8cee-65d3eda91657","Type":"ContainerDied","Data":"0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10"} Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.971982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2j47v" event={"ID":"312a6b49-6e52-4e29-8cee-65d3eda91657","Type":"ContainerDied","Data":"d5e87468207123f2675069e30dd1d313e5c898c6a2fe7b626d9b2c23759a4718"} Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.972029 4799 scope.go:117] "RemoveContainer" containerID="0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10" Jan 27 08:20:16 crc kubenswrapper[4799]: I0127 08:20:16.989890 4799 scope.go:117] "RemoveContainer" containerID="829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9" Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.012864 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2j47v"] Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.019225 4799 scope.go:117] "RemoveContainer" containerID="bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7" Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.021157 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2j47v"] Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.044507 4799 scope.go:117] "RemoveContainer" containerID="0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10" Jan 27 08:20:17 crc kubenswrapper[4799]: E0127 08:20:17.044959 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10\": container with ID starting with 0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10 not found: ID does not exist" containerID="0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10" Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.044990 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10"} err="failed to get container status \"0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10\": rpc error: code = NotFound desc = could not find container \"0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10\": container with ID starting with 0fc4cb753f9dbd1c2a2b91e618f18e16a51d2d68dfb352724a2ce61581611f10 not found: ID does not exist" Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.045015 4799 scope.go:117] "RemoveContainer" containerID="829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9" Jan 27 08:20:17 crc kubenswrapper[4799]: E0127 08:20:17.045251 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9\": container with ID starting with 829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9 not found: ID does not exist" containerID="829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9" Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.045291 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9"} err="failed to get container status \"829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9\": rpc error: code = NotFound desc = could not find container \"829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9\": container with ID 
starting with 829f121052f2b484f343c2d00a2d752737df805b56c9fad8f9f1524600d475f9 not found: ID does not exist" Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.045318 4799 scope.go:117] "RemoveContainer" containerID="bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7" Jan 27 08:20:17 crc kubenswrapper[4799]: E0127 08:20:17.045549 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7\": container with ID starting with bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7 not found: ID does not exist" containerID="bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7" Jan 27 08:20:17 crc kubenswrapper[4799]: I0127 08:20:17.045570 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7"} err="failed to get container status \"bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7\": rpc error: code = NotFound desc = could not find container \"bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7\": container with ID starting with bfa5be5064cc41c59588595ccea077229e5af2d78fbbe2d750ee25981c1430d7 not found: ID does not exist" Jan 27 08:20:18 crc kubenswrapper[4799]: I0127 08:20:18.463158 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" path="/var/lib/kubelet/pods/312a6b49-6e52-4e29-8cee-65d3eda91657/volumes" Jan 27 08:21:23 crc kubenswrapper[4799]: I0127 08:21:23.731319 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:21:23 crc kubenswrapper[4799]: I0127 
08:21:23.731964 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:21:53 crc kubenswrapper[4799]: I0127 08:21:53.731055 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:21:53 crc kubenswrapper[4799]: I0127 08:21:53.731835 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.731362 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.732020 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.732076 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.732761 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"092c1a731185a1e64f1e7f6d7b8f7a4e0260b552134cd69fabc237b68b3e120c"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.732826 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://092c1a731185a1e64f1e7f6d7b8f7a4e0260b552134cd69fabc237b68b3e120c" gracePeriod=600 Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.990344 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="092c1a731185a1e64f1e7f6d7b8f7a4e0260b552134cd69fabc237b68b3e120c" exitCode=0 Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.990383 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"092c1a731185a1e64f1e7f6d7b8f7a4e0260b552134cd69fabc237b68b3e120c"} Jan 27 08:22:23 crc kubenswrapper[4799]: I0127 08:22:23.990667 4799 scope.go:117] "RemoveContainer" containerID="10ad08b873391e987fe0c64bd4dd052c6500dc0604d09043dca802bf464c2c6f" Jan 27 08:22:24 crc kubenswrapper[4799]: I0127 08:22:24.997695 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd"} Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.364080 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwdl4"] Jan 27 08:23:03 crc kubenswrapper[4799]: E0127 08:23:03.365047 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="registry-server" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.365063 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="registry-server" Jan 27 08:23:03 crc kubenswrapper[4799]: E0127 08:23:03.365080 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="extract-utilities" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.365088 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="extract-utilities" Jan 27 08:23:03 crc kubenswrapper[4799]: E0127 08:23:03.365104 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="extract-content" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.365113 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="extract-content" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.365292 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="312a6b49-6e52-4e29-8cee-65d3eda91657" containerName="registry-server" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.366494 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.382192 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwdl4"] Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.476176 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-utilities\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.476251 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-catalog-content\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.476391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8xv\" (UniqueName: \"kubernetes.io/projected/9ea707dd-ba2d-41cd-b182-277f524d63bf-kube-api-access-fv8xv\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.578258 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-utilities\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.578704 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-catalog-content\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.578843 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8xv\" (UniqueName: \"kubernetes.io/projected/9ea707dd-ba2d-41cd-b182-277f524d63bf-kube-api-access-fv8xv\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.579184 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-utilities\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.579501 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-catalog-content\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.598373 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8xv\" (UniqueName: \"kubernetes.io/projected/9ea707dd-ba2d-41cd-b182-277f524d63bf-kube-api-access-fv8xv\") pod \"certified-operators-cwdl4\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:03 crc kubenswrapper[4799]: I0127 08:23:03.698314 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:04 crc kubenswrapper[4799]: I0127 08:23:04.116873 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwdl4"] Jan 27 08:23:04 crc kubenswrapper[4799]: I0127 08:23:04.300777 4799 generic.go:334] "Generic (PLEG): container finished" podID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerID="064a2a8d25310c1716f4baae396b7e29110e09b06b6f5d66f17a0f813a5170a6" exitCode=0 Jan 27 08:23:04 crc kubenswrapper[4799]: I0127 08:23:04.300822 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwdl4" event={"ID":"9ea707dd-ba2d-41cd-b182-277f524d63bf","Type":"ContainerDied","Data":"064a2a8d25310c1716f4baae396b7e29110e09b06b6f5d66f17a0f813a5170a6"} Jan 27 08:23:04 crc kubenswrapper[4799]: I0127 08:23:04.301027 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwdl4" event={"ID":"9ea707dd-ba2d-41cd-b182-277f524d63bf","Type":"ContainerStarted","Data":"ced543f17199557cd08b5d9846974f2500b169e1468f5c48349bf7851faf9a21"} Jan 27 08:23:04 crc kubenswrapper[4799]: I0127 08:23:04.303078 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:23:05 crc kubenswrapper[4799]: I0127 08:23:05.311982 4799 generic.go:334] "Generic (PLEG): container finished" podID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerID="7b358682f757fef6c0d41329b19cff2cfc2ac41eed2ed77032e37325e1fb6811" exitCode=0 Jan 27 08:23:05 crc kubenswrapper[4799]: I0127 08:23:05.312047 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwdl4" event={"ID":"9ea707dd-ba2d-41cd-b182-277f524d63bf","Type":"ContainerDied","Data":"7b358682f757fef6c0d41329b19cff2cfc2ac41eed2ed77032e37325e1fb6811"} Jan 27 08:23:06 crc kubenswrapper[4799]: I0127 08:23:06.321943 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-cwdl4" event={"ID":"9ea707dd-ba2d-41cd-b182-277f524d63bf","Type":"ContainerStarted","Data":"6a857d146d30442b49d6bd9325038f3818212b8abcd559575e2c6b6e065bffda"} Jan 27 08:23:06 crc kubenswrapper[4799]: I0127 08:23:06.345736 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwdl4" podStartSLOduration=1.868111925 podStartE2EDuration="3.345715987s" podCreationTimestamp="2026-01-27 08:23:03 +0000 UTC" firstStartedPulling="2026-01-27 08:23:04.302808779 +0000 UTC m=+2250.613912844" lastFinishedPulling="2026-01-27 08:23:05.780412841 +0000 UTC m=+2252.091516906" observedRunningTime="2026-01-27 08:23:06.344515135 +0000 UTC m=+2252.655619220" watchObservedRunningTime="2026-01-27 08:23:06.345715987 +0000 UTC m=+2252.656820052" Jan 27 08:23:13 crc kubenswrapper[4799]: I0127 08:23:13.699187 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:13 crc kubenswrapper[4799]: I0127 08:23:13.700110 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:13 crc kubenswrapper[4799]: I0127 08:23:13.780692 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:14 crc kubenswrapper[4799]: I0127 08:23:14.460264 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:14 crc kubenswrapper[4799]: I0127 08:23:14.511486 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwdl4"] Jan 27 08:23:16 crc kubenswrapper[4799]: I0127 08:23:16.413171 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwdl4" 
podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="registry-server" containerID="cri-o://6a857d146d30442b49d6bd9325038f3818212b8abcd559575e2c6b6e065bffda" gracePeriod=2 Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.425569 4799 generic.go:334] "Generic (PLEG): container finished" podID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerID="6a857d146d30442b49d6bd9325038f3818212b8abcd559575e2c6b6e065bffda" exitCode=0 Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.425700 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwdl4" event={"ID":"9ea707dd-ba2d-41cd-b182-277f524d63bf","Type":"ContainerDied","Data":"6a857d146d30442b49d6bd9325038f3818212b8abcd559575e2c6b6e065bffda"} Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.425965 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwdl4" event={"ID":"9ea707dd-ba2d-41cd-b182-277f524d63bf","Type":"ContainerDied","Data":"ced543f17199557cd08b5d9846974f2500b169e1468f5c48349bf7851faf9a21"} Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.425990 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced543f17199557cd08b5d9846974f2500b169e1468f5c48349bf7851faf9a21" Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.473919 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.603833 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-catalog-content\") pod \"9ea707dd-ba2d-41cd-b182-277f524d63bf\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.603926 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv8xv\" (UniqueName: \"kubernetes.io/projected/9ea707dd-ba2d-41cd-b182-277f524d63bf-kube-api-access-fv8xv\") pod \"9ea707dd-ba2d-41cd-b182-277f524d63bf\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.604007 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-utilities\") pod \"9ea707dd-ba2d-41cd-b182-277f524d63bf\" (UID: \"9ea707dd-ba2d-41cd-b182-277f524d63bf\") " Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.605001 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-utilities" (OuterVolumeSpecName: "utilities") pod "9ea707dd-ba2d-41cd-b182-277f524d63bf" (UID: "9ea707dd-ba2d-41cd-b182-277f524d63bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.609266 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea707dd-ba2d-41cd-b182-277f524d63bf-kube-api-access-fv8xv" (OuterVolumeSpecName: "kube-api-access-fv8xv") pod "9ea707dd-ba2d-41cd-b182-277f524d63bf" (UID: "9ea707dd-ba2d-41cd-b182-277f524d63bf"). InnerVolumeSpecName "kube-api-access-fv8xv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.662468 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ea707dd-ba2d-41cd-b182-277f524d63bf" (UID: "9ea707dd-ba2d-41cd-b182-277f524d63bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.705424 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.705453 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea707dd-ba2d-41cd-b182-277f524d63bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:23:17 crc kubenswrapper[4799]: I0127 08:23:17.705466 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv8xv\" (UniqueName: \"kubernetes.io/projected/9ea707dd-ba2d-41cd-b182-277f524d63bf-kube-api-access-fv8xv\") on node \"crc\" DevicePath \"\"" Jan 27 08:23:18 crc kubenswrapper[4799]: I0127 08:23:18.435525 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwdl4" Jan 27 08:23:18 crc kubenswrapper[4799]: I0127 08:23:18.487929 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwdl4"] Jan 27 08:23:18 crc kubenswrapper[4799]: I0127 08:23:18.495484 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwdl4"] Jan 27 08:23:20 crc kubenswrapper[4799]: I0127 08:23:20.459443 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" path="/var/lib/kubelet/pods/9ea707dd-ba2d-41cd-b182-277f524d63bf/volumes" Jan 27 08:24:53 crc kubenswrapper[4799]: I0127 08:24:53.731810 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:24:53 crc kubenswrapper[4799]: I0127 08:24:53.732363 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:25:23 crc kubenswrapper[4799]: I0127 08:25:23.730981 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:25:23 crc kubenswrapper[4799]: I0127 08:25:23.731584 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:25:53 crc kubenswrapper[4799]: I0127 08:25:53.731424 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:25:53 crc kubenswrapper[4799]: I0127 08:25:53.731948 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:25:53 crc kubenswrapper[4799]: I0127 08:25:53.732000 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:25:53 crc kubenswrapper[4799]: I0127 08:25:53.732694 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:25:53 crc kubenswrapper[4799]: I0127 08:25:53.732760 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" gracePeriod=600 Jan 27 
08:25:53 crc kubenswrapper[4799]: E0127 08:25:53.856342 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:25:54 crc kubenswrapper[4799]: I0127 08:25:54.488495 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" exitCode=0 Jan 27 08:25:54 crc kubenswrapper[4799]: I0127 08:25:54.488618 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd"} Jan 27 08:25:54 crc kubenswrapper[4799]: I0127 08:25:54.489461 4799 scope.go:117] "RemoveContainer" containerID="092c1a731185a1e64f1e7f6d7b8f7a4e0260b552134cd69fabc237b68b3e120c" Jan 27 08:25:54 crc kubenswrapper[4799]: I0127 08:25:54.490359 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:25:54 crc kubenswrapper[4799]: E0127 08:25:54.491942 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:26:05 crc kubenswrapper[4799]: I0127 08:26:05.451767 4799 
scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:26:05 crc kubenswrapper[4799]: E0127 08:26:05.452722 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:26:17 crc kubenswrapper[4799]: I0127 08:26:17.451816 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:26:17 crc kubenswrapper[4799]: E0127 08:26:17.452590 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:26:31 crc kubenswrapper[4799]: I0127 08:26:31.451918 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:26:31 crc kubenswrapper[4799]: E0127 08:26:31.453838 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:26:42 crc kubenswrapper[4799]: I0127 
08:26:42.452468 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:26:42 crc kubenswrapper[4799]: E0127 08:26:42.455868 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:26:57 crc kubenswrapper[4799]: I0127 08:26:57.451688 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:26:57 crc kubenswrapper[4799]: E0127 08:26:57.452667 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:27:10 crc kubenswrapper[4799]: I0127 08:27:10.452096 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:27:10 crc kubenswrapper[4799]: E0127 08:27:10.453247 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:27:21 crc 
kubenswrapper[4799]: I0127 08:27:21.451838 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:27:21 crc kubenswrapper[4799]: E0127 08:27:21.453097 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:27:35 crc kubenswrapper[4799]: I0127 08:27:35.451524 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:27:35 crc kubenswrapper[4799]: E0127 08:27:35.452573 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:27:47 crc kubenswrapper[4799]: I0127 08:27:47.450971 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:27:47 crc kubenswrapper[4799]: E0127 08:27:47.451759 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 
27 08:27:59 crc kubenswrapper[4799]: I0127 08:27:59.452023 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:27:59 crc kubenswrapper[4799]: E0127 08:27:59.453163 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:28:12 crc kubenswrapper[4799]: I0127 08:28:12.451628 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:28:12 crc kubenswrapper[4799]: E0127 08:28:12.453952 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:28:25 crc kubenswrapper[4799]: I0127 08:28:25.451913 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:28:25 crc kubenswrapper[4799]: E0127 08:28:25.455100 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:28:39 crc kubenswrapper[4799]: I0127 08:28:39.452085 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:28:39 crc kubenswrapper[4799]: E0127 08:28:39.453081 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.186263 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rfslm"] Jan 27 08:28:40 crc kubenswrapper[4799]: E0127 08:28:40.186993 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="registry-server" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.187018 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="registry-server" Jan 27 08:28:40 crc kubenswrapper[4799]: E0127 08:28:40.187064 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="extract-utilities" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.187074 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="extract-utilities" Jan 27 08:28:40 crc kubenswrapper[4799]: E0127 08:28:40.187084 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="extract-content" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.187093 4799 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="extract-content" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.187268 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea707dd-ba2d-41cd-b182-277f524d63bf" containerName="registry-server" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.206716 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rfslm"] Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.206835 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.341444 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/61825565-6aa7-4cb9-941a-516909175646-kube-api-access-lvkbc\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.341528 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-utilities\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.341648 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-catalog-content\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.442909 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/61825565-6aa7-4cb9-941a-516909175646-kube-api-access-lvkbc\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.442972 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-utilities\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.443023 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-catalog-content\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.443410 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-catalog-content\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.443574 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-utilities\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.467434 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvkbc\" 
(UniqueName: \"kubernetes.io/projected/61825565-6aa7-4cb9-941a-516909175646-kube-api-access-lvkbc\") pod \"redhat-operators-rfslm\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.530001 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:40 crc kubenswrapper[4799]: I0127 08:28:40.988327 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rfslm"] Jan 27 08:28:41 crc kubenswrapper[4799]: I0127 08:28:41.852923 4799 generic.go:334] "Generic (PLEG): container finished" podID="61825565-6aa7-4cb9-941a-516909175646" containerID="661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a" exitCode=0 Jan 27 08:28:41 crc kubenswrapper[4799]: I0127 08:28:41.852970 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfslm" event={"ID":"61825565-6aa7-4cb9-941a-516909175646","Type":"ContainerDied","Data":"661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a"} Jan 27 08:28:41 crc kubenswrapper[4799]: I0127 08:28:41.853263 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfslm" event={"ID":"61825565-6aa7-4cb9-941a-516909175646","Type":"ContainerStarted","Data":"1a4d7119951cffbe29a2913052e2b3a19f83731be9d0dfdc4a679702a79fdc00"} Jan 27 08:28:41 crc kubenswrapper[4799]: I0127 08:28:41.854820 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:28:42 crc kubenswrapper[4799]: I0127 08:28:42.862459 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfslm" event={"ID":"61825565-6aa7-4cb9-941a-516909175646","Type":"ContainerStarted","Data":"d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9"} Jan 27 08:28:43 crc 
kubenswrapper[4799]: I0127 08:28:43.183012 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lgv2j"] Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.184828 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.203696 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgv2j"] Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.283115 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-catalog-content\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.283261 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-utilities\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.283331 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8mm\" (UniqueName: \"kubernetes.io/projected/767204f1-5d28-48f0-b6a4-a24c709b4809-kube-api-access-bv8mm\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.384696 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-catalog-content\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.384833 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-utilities\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.384871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv8mm\" (UniqueName: \"kubernetes.io/projected/767204f1-5d28-48f0-b6a4-a24c709b4809-kube-api-access-bv8mm\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.385444 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-catalog-content\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.387225 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-utilities\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.411407 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv8mm\" (UniqueName: 
\"kubernetes.io/projected/767204f1-5d28-48f0-b6a4-a24c709b4809-kube-api-access-bv8mm\") pod \"redhat-marketplace-lgv2j\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.501950 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.873867 4799 generic.go:334] "Generic (PLEG): container finished" podID="61825565-6aa7-4cb9-941a-516909175646" containerID="d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9" exitCode=0 Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.873916 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfslm" event={"ID":"61825565-6aa7-4cb9-941a-516909175646","Type":"ContainerDied","Data":"d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9"} Jan 27 08:28:43 crc kubenswrapper[4799]: I0127 08:28:43.990578 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgv2j"] Jan 27 08:28:43 crc kubenswrapper[4799]: W0127 08:28:43.996716 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod767204f1_5d28_48f0_b6a4_a24c709b4809.slice/crio-25bdf9db2b7f19e7aae53d5899fcce0c585afaceb609814351c9e929638a1f8d WatchSource:0}: Error finding container 25bdf9db2b7f19e7aae53d5899fcce0c585afaceb609814351c9e929638a1f8d: Status 404 returned error can't find the container with id 25bdf9db2b7f19e7aae53d5899fcce0c585afaceb609814351c9e929638a1f8d Jan 27 08:28:44 crc kubenswrapper[4799]: I0127 08:28:44.886387 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfslm" 
event={"ID":"61825565-6aa7-4cb9-941a-516909175646","Type":"ContainerStarted","Data":"f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463"} Jan 27 08:28:44 crc kubenswrapper[4799]: I0127 08:28:44.888462 4799 generic.go:334] "Generic (PLEG): container finished" podID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerID="6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14" exitCode=0 Jan 27 08:28:44 crc kubenswrapper[4799]: I0127 08:28:44.888521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgv2j" event={"ID":"767204f1-5d28-48f0-b6a4-a24c709b4809","Type":"ContainerDied","Data":"6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14"} Jan 27 08:28:44 crc kubenswrapper[4799]: I0127 08:28:44.888554 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgv2j" event={"ID":"767204f1-5d28-48f0-b6a4-a24c709b4809","Type":"ContainerStarted","Data":"25bdf9db2b7f19e7aae53d5899fcce0c585afaceb609814351c9e929638a1f8d"} Jan 27 08:28:44 crc kubenswrapper[4799]: I0127 08:28:44.916611 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rfslm" podStartSLOduration=2.465031035 podStartE2EDuration="4.916591868s" podCreationTimestamp="2026-01-27 08:28:40 +0000 UTC" firstStartedPulling="2026-01-27 08:28:41.854576771 +0000 UTC m=+2588.165680836" lastFinishedPulling="2026-01-27 08:28:44.306137564 +0000 UTC m=+2590.617241669" observedRunningTime="2026-01-27 08:28:44.912979801 +0000 UTC m=+2591.224083906" watchObservedRunningTime="2026-01-27 08:28:44.916591868 +0000 UTC m=+2591.227695943" Jan 27 08:28:45 crc kubenswrapper[4799]: I0127 08:28:45.897457 4799 generic.go:334] "Generic (PLEG): container finished" podID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerID="e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20" exitCode=0 Jan 27 08:28:45 crc kubenswrapper[4799]: I0127 
08:28:45.897511 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgv2j" event={"ID":"767204f1-5d28-48f0-b6a4-a24c709b4809","Type":"ContainerDied","Data":"e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20"} Jan 27 08:28:46 crc kubenswrapper[4799]: I0127 08:28:46.907985 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgv2j" event={"ID":"767204f1-5d28-48f0-b6a4-a24c709b4809","Type":"ContainerStarted","Data":"d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd"} Jan 27 08:28:46 crc kubenswrapper[4799]: I0127 08:28:46.934915 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lgv2j" podStartSLOduration=2.566931109 podStartE2EDuration="3.93489503s" podCreationTimestamp="2026-01-27 08:28:43 +0000 UTC" firstStartedPulling="2026-01-27 08:28:44.890723295 +0000 UTC m=+2591.201827370" lastFinishedPulling="2026-01-27 08:28:46.258687216 +0000 UTC m=+2592.569791291" observedRunningTime="2026-01-27 08:28:46.93050675 +0000 UTC m=+2593.241610855" watchObservedRunningTime="2026-01-27 08:28:46.93489503 +0000 UTC m=+2593.245999105" Jan 27 08:28:50 crc kubenswrapper[4799]: I0127 08:28:50.531254 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:50 crc kubenswrapper[4799]: I0127 08:28:50.531933 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:28:51 crc kubenswrapper[4799]: I0127 08:28:51.451489 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:28:51 crc kubenswrapper[4799]: E0127 08:28:51.451869 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:28:51 crc kubenswrapper[4799]: I0127 08:28:51.577863 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rfslm" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="registry-server" probeResult="failure" output=< Jan 27 08:28:51 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 08:28:51 crc kubenswrapper[4799]: > Jan 27 08:28:53 crc kubenswrapper[4799]: I0127 08:28:53.502361 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:53 crc kubenswrapper[4799]: I0127 08:28:53.502452 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:53 crc kubenswrapper[4799]: I0127 08:28:53.559686 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:54 crc kubenswrapper[4799]: I0127 08:28:54.040131 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:54 crc kubenswrapper[4799]: I0127 08:28:54.094199 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgv2j"] Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.007806 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lgv2j" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="registry-server" containerID="cri-o://d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd" gracePeriod=2 Jan 27 
08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.518865 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.679243 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-catalog-content\") pod \"767204f1-5d28-48f0-b6a4-a24c709b4809\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.679448 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-utilities\") pod \"767204f1-5d28-48f0-b6a4-a24c709b4809\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.679483 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv8mm\" (UniqueName: \"kubernetes.io/projected/767204f1-5d28-48f0-b6a4-a24c709b4809-kube-api-access-bv8mm\") pod \"767204f1-5d28-48f0-b6a4-a24c709b4809\" (UID: \"767204f1-5d28-48f0-b6a4-a24c709b4809\") " Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.680694 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-utilities" (OuterVolumeSpecName: "utilities") pod "767204f1-5d28-48f0-b6a4-a24c709b4809" (UID: "767204f1-5d28-48f0-b6a4-a24c709b4809"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.688822 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767204f1-5d28-48f0-b6a4-a24c709b4809-kube-api-access-bv8mm" (OuterVolumeSpecName: "kube-api-access-bv8mm") pod "767204f1-5d28-48f0-b6a4-a24c709b4809" (UID: "767204f1-5d28-48f0-b6a4-a24c709b4809"). InnerVolumeSpecName "kube-api-access-bv8mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.702942 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "767204f1-5d28-48f0-b6a4-a24c709b4809" (UID: "767204f1-5d28-48f0-b6a4-a24c709b4809"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.780860 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.781618 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv8mm\" (UniqueName: \"kubernetes.io/projected/767204f1-5d28-48f0-b6a4-a24c709b4809-kube-api-access-bv8mm\") on node \"crc\" DevicePath \"\"" Jan 27 08:28:56 crc kubenswrapper[4799]: I0127 08:28:56.781679 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767204f1-5d28-48f0-b6a4-a24c709b4809-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.019826 4799 generic.go:334] "Generic (PLEG): container finished" podID="767204f1-5d28-48f0-b6a4-a24c709b4809" 
containerID="d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd" exitCode=0 Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.019881 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgv2j" event={"ID":"767204f1-5d28-48f0-b6a4-a24c709b4809","Type":"ContainerDied","Data":"d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd"} Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.019994 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgv2j" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.020025 4799 scope.go:117] "RemoveContainer" containerID="d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.020003 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgv2j" event={"ID":"767204f1-5d28-48f0-b6a4-a24c709b4809","Type":"ContainerDied","Data":"25bdf9db2b7f19e7aae53d5899fcce0c585afaceb609814351c9e929638a1f8d"} Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.042400 4799 scope.go:117] "RemoveContainer" containerID="e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.067789 4799 scope.go:117] "RemoveContainer" containerID="6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.106180 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgv2j"] Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.107880 4799 scope.go:117] "RemoveContainer" containerID="d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd" Jan 27 08:28:57 crc kubenswrapper[4799]: E0127 08:28:57.108379 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd\": container with ID starting with d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd not found: ID does not exist" containerID="d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.108412 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd"} err="failed to get container status \"d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd\": rpc error: code = NotFound desc = could not find container \"d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd\": container with ID starting with d11711c11874386e772a0a3d9210f19380f289112ed939835bb7b6ec53aefddd not found: ID does not exist" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.108436 4799 scope.go:117] "RemoveContainer" containerID="e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20" Jan 27 08:28:57 crc kubenswrapper[4799]: E0127 08:28:57.109027 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20\": container with ID starting with e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20 not found: ID does not exist" containerID="e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.109087 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20"} err="failed to get container status \"e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20\": rpc error: code = NotFound desc = could not find container \"e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20\": container 
with ID starting with e2eeadea23caf55f0c64834d66516b8285f659149f5116ab1a0564c00910ff20 not found: ID does not exist" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.109132 4799 scope.go:117] "RemoveContainer" containerID="6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14" Jan 27 08:28:57 crc kubenswrapper[4799]: E0127 08:28:57.109502 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14\": container with ID starting with 6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14 not found: ID does not exist" containerID="6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.109529 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14"} err="failed to get container status \"6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14\": rpc error: code = NotFound desc = could not find container \"6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14\": container with ID starting with 6fff64c30486201e6c9a11d3c22a3b05ad7069d565ba165420bf9769892b8e14 not found: ID does not exist" Jan 27 08:28:57 crc kubenswrapper[4799]: I0127 08:28:57.113818 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgv2j"] Jan 27 08:28:58 crc kubenswrapper[4799]: I0127 08:28:58.465351 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" path="/var/lib/kubelet/pods/767204f1-5d28-48f0-b6a4-a24c709b4809/volumes" Jan 27 08:29:00 crc kubenswrapper[4799]: I0127 08:29:00.572202 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:29:00 crc 
kubenswrapper[4799]: I0127 08:29:00.624607 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:29:00 crc kubenswrapper[4799]: I0127 08:29:00.819057 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rfslm"] Jan 27 08:29:02 crc kubenswrapper[4799]: I0127 08:29:02.059084 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rfslm" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="registry-server" containerID="cri-o://f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463" gracePeriod=2 Jan 27 08:29:02 crc kubenswrapper[4799]: I0127 08:29:02.998064 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.065694 4799 generic.go:334] "Generic (PLEG): container finished" podID="61825565-6aa7-4cb9-941a-516909175646" containerID="f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463" exitCode=0 Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.065738 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfslm" event={"ID":"61825565-6aa7-4cb9-941a-516909175646","Type":"ContainerDied","Data":"f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463"} Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.065774 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rfslm" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.065822 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rfslm" event={"ID":"61825565-6aa7-4cb9-941a-516909175646","Type":"ContainerDied","Data":"1a4d7119951cffbe29a2913052e2b3a19f83731be9d0dfdc4a679702a79fdc00"} Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.065899 4799 scope.go:117] "RemoveContainer" containerID="f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.084766 4799 scope.go:117] "RemoveContainer" containerID="d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.092237 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/61825565-6aa7-4cb9-941a-516909175646-kube-api-access-lvkbc\") pod \"61825565-6aa7-4cb9-941a-516909175646\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.092406 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-catalog-content\") pod \"61825565-6aa7-4cb9-941a-516909175646\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.092432 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-utilities\") pod \"61825565-6aa7-4cb9-941a-516909175646\" (UID: \"61825565-6aa7-4cb9-941a-516909175646\") " Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.093787 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-utilities" (OuterVolumeSpecName: "utilities") pod "61825565-6aa7-4cb9-941a-516909175646" (UID: "61825565-6aa7-4cb9-941a-516909175646"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.098420 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61825565-6aa7-4cb9-941a-516909175646-kube-api-access-lvkbc" (OuterVolumeSpecName: "kube-api-access-lvkbc") pod "61825565-6aa7-4cb9-941a-516909175646" (UID: "61825565-6aa7-4cb9-941a-516909175646"). InnerVolumeSpecName "kube-api-access-lvkbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.105111 4799 scope.go:117] "RemoveContainer" containerID="661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.151849 4799 scope.go:117] "RemoveContainer" containerID="f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463" Jan 27 08:29:03 crc kubenswrapper[4799]: E0127 08:29:03.152282 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463\": container with ID starting with f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463 not found: ID does not exist" containerID="f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.152365 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463"} err="failed to get container status \"f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463\": rpc error: code = NotFound desc = could not find container 
\"f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463\": container with ID starting with f8165531e184327ff1422b0e91e53e328c2b80b8837b967274c10ad59629c463 not found: ID does not exist" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.152397 4799 scope.go:117] "RemoveContainer" containerID="d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9" Jan 27 08:29:03 crc kubenswrapper[4799]: E0127 08:29:03.152821 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9\": container with ID starting with d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9 not found: ID does not exist" containerID="d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.152877 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9"} err="failed to get container status \"d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9\": rpc error: code = NotFound desc = could not find container \"d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9\": container with ID starting with d3cba3666b37d5b29ede22432ca81b0e5fc29756c27bfa231adc78465643c8d9 not found: ID does not exist" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.152910 4799 scope.go:117] "RemoveContainer" containerID="661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a" Jan 27 08:29:03 crc kubenswrapper[4799]: E0127 08:29:03.153874 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a\": container with ID starting with 661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a not found: ID does not exist" 
containerID="661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.153902 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a"} err="failed to get container status \"661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a\": rpc error: code = NotFound desc = could not find container \"661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a\": container with ID starting with 661fb114069c1e0cb89b2d9bd517eda12b6cdf762b1f8568dd16f4e84a63bb8a not found: ID does not exist" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.193850 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.193902 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvkbc\" (UniqueName: \"kubernetes.io/projected/61825565-6aa7-4cb9-941a-516909175646-kube-api-access-lvkbc\") on node \"crc\" DevicePath \"\"" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.216974 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61825565-6aa7-4cb9-941a-516909175646" (UID: "61825565-6aa7-4cb9-941a-516909175646"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.295751 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61825565-6aa7-4cb9-941a-516909175646-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.398664 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rfslm"] Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.403087 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rfslm"] Jan 27 08:29:03 crc kubenswrapper[4799]: I0127 08:29:03.451633 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:29:03 crc kubenswrapper[4799]: E0127 08:29:03.451958 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:29:04 crc kubenswrapper[4799]: I0127 08:29:04.463778 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61825565-6aa7-4cb9-941a-516909175646" path="/var/lib/kubelet/pods/61825565-6aa7-4cb9-941a-516909175646/volumes" Jan 27 08:29:18 crc kubenswrapper[4799]: I0127 08:29:18.451535 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:29:18 crc kubenswrapper[4799]: E0127 08:29:18.452437 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:29:29 crc kubenswrapper[4799]: I0127 08:29:29.451720 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:29:29 crc kubenswrapper[4799]: E0127 08:29:29.453595 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:29:38 crc kubenswrapper[4799]: I0127 08:29:38.637124 4799 scope.go:117] "RemoveContainer" containerID="7b358682f757fef6c0d41329b19cff2cfc2ac41eed2ed77032e37325e1fb6811" Jan 27 08:29:38 crc kubenswrapper[4799]: I0127 08:29:38.672625 4799 scope.go:117] "RemoveContainer" containerID="6a857d146d30442b49d6bd9325038f3818212b8abcd559575e2c6b6e065bffda" Jan 27 08:29:38 crc kubenswrapper[4799]: I0127 08:29:38.686498 4799 scope.go:117] "RemoveContainer" containerID="064a2a8d25310c1716f4baae396b7e29110e09b06b6f5d66f17a0f813a5170a6" Jan 27 08:29:40 crc kubenswrapper[4799]: I0127 08:29:40.452757 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:29:40 crc kubenswrapper[4799]: E0127 08:29:40.453578 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:29:51 crc kubenswrapper[4799]: I0127 08:29:51.451454 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:29:51 crc kubenswrapper[4799]: E0127 08:29:51.453956 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.147078 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf"] Jan 27 08:30:00 crc kubenswrapper[4799]: E0127 08:30:00.147941 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="registry-server" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.147955 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="registry-server" Jan 27 08:30:00 crc kubenswrapper[4799]: E0127 08:30:00.147970 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="extract-content" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.147976 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="extract-content" Jan 27 08:30:00 crc kubenswrapper[4799]: E0127 08:30:00.147985 4799 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="extract-utilities" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.147992 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="extract-utilities" Jan 27 08:30:00 crc kubenswrapper[4799]: E0127 08:30:00.148006 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="extract-utilities" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.148012 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="extract-utilities" Jan 27 08:30:00 crc kubenswrapper[4799]: E0127 08:30:00.148032 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="registry-server" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.148039 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="registry-server" Jan 27 08:30:00 crc kubenswrapper[4799]: E0127 08:30:00.148048 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="extract-content" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.148053 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="extract-content" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.148198 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="767204f1-5d28-48f0-b6a4-a24c709b4809" containerName="registry-server" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.148221 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="61825565-6aa7-4cb9-941a-516909175646" containerName="registry-server" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.149057 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.152789 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.153590 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.161072 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf"] Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.338874 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb776\" (UniqueName: \"kubernetes.io/projected/d8451b63-3252-4186-9a88-3d831e1a66fc-kube-api-access-kb776\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.338944 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8451b63-3252-4186-9a88-3d831e1a66fc-config-volume\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.339064 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8451b63-3252-4186-9a88-3d831e1a66fc-secret-volume\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.440988 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb776\" (UniqueName: \"kubernetes.io/projected/d8451b63-3252-4186-9a88-3d831e1a66fc-kube-api-access-kb776\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.441111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8451b63-3252-4186-9a88-3d831e1a66fc-config-volume\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.441203 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8451b63-3252-4186-9a88-3d831e1a66fc-secret-volume\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.443271 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8451b63-3252-4186-9a88-3d831e1a66fc-config-volume\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.453626 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d8451b63-3252-4186-9a88-3d831e1a66fc-secret-volume\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.460877 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb776\" (UniqueName: \"kubernetes.io/projected/d8451b63-3252-4186-9a88-3d831e1a66fc-kube-api-access-kb776\") pod \"collect-profiles-29491710-7qljf\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.471323 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:00 crc kubenswrapper[4799]: I0127 08:30:00.864603 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf"] Jan 27 08:30:01 crc kubenswrapper[4799]: I0127 08:30:01.560114 4799 generic.go:334] "Generic (PLEG): container finished" podID="d8451b63-3252-4186-9a88-3d831e1a66fc" containerID="9a5c27ce1e96d191a5129160d83742a898be6a93ece214e63952542f67845c16" exitCode=0 Jan 27 08:30:01 crc kubenswrapper[4799]: I0127 08:30:01.560206 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" event={"ID":"d8451b63-3252-4186-9a88-3d831e1a66fc","Type":"ContainerDied","Data":"9a5c27ce1e96d191a5129160d83742a898be6a93ece214e63952542f67845c16"} Jan 27 08:30:01 crc kubenswrapper[4799]: I0127 08:30:01.561336 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" 
event={"ID":"d8451b63-3252-4186-9a88-3d831e1a66fc","Type":"ContainerStarted","Data":"d197e26b925464a240aaef0341687d3978c7009bf2b652c9f568b2a52872dd4f"} Jan 27 08:30:02 crc kubenswrapper[4799]: I0127 08:30:02.796421 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:02 crc kubenswrapper[4799]: I0127 08:30:02.972708 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8451b63-3252-4186-9a88-3d831e1a66fc-config-volume\") pod \"d8451b63-3252-4186-9a88-3d831e1a66fc\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " Jan 27 08:30:02 crc kubenswrapper[4799]: I0127 08:30:02.972806 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8451b63-3252-4186-9a88-3d831e1a66fc-secret-volume\") pod \"d8451b63-3252-4186-9a88-3d831e1a66fc\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " Jan 27 08:30:02 crc kubenswrapper[4799]: I0127 08:30:02.972861 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb776\" (UniqueName: \"kubernetes.io/projected/d8451b63-3252-4186-9a88-3d831e1a66fc-kube-api-access-kb776\") pod \"d8451b63-3252-4186-9a88-3d831e1a66fc\" (UID: \"d8451b63-3252-4186-9a88-3d831e1a66fc\") " Jan 27 08:30:02 crc kubenswrapper[4799]: I0127 08:30:02.974951 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8451b63-3252-4186-9a88-3d831e1a66fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8451b63-3252-4186-9a88-3d831e1a66fc" (UID: "d8451b63-3252-4186-9a88-3d831e1a66fc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:30:02 crc kubenswrapper[4799]: I0127 08:30:02.979693 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8451b63-3252-4186-9a88-3d831e1a66fc-kube-api-access-kb776" (OuterVolumeSpecName: "kube-api-access-kb776") pod "d8451b63-3252-4186-9a88-3d831e1a66fc" (UID: "d8451b63-3252-4186-9a88-3d831e1a66fc"). InnerVolumeSpecName "kube-api-access-kb776". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:30:02 crc kubenswrapper[4799]: I0127 08:30:02.980844 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8451b63-3252-4186-9a88-3d831e1a66fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8451b63-3252-4186-9a88-3d831e1a66fc" (UID: "d8451b63-3252-4186-9a88-3d831e1a66fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.074660 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8451b63-3252-4186-9a88-3d831e1a66fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.074703 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb776\" (UniqueName: \"kubernetes.io/projected/d8451b63-3252-4186-9a88-3d831e1a66fc-kube-api-access-kb776\") on node \"crc\" DevicePath \"\"" Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.074723 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8451b63-3252-4186-9a88-3d831e1a66fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.578125 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" 
event={"ID":"d8451b63-3252-4186-9a88-3d831e1a66fc","Type":"ContainerDied","Data":"d197e26b925464a240aaef0341687d3978c7009bf2b652c9f568b2a52872dd4f"} Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.578437 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d197e26b925464a240aaef0341687d3978c7009bf2b652c9f568b2a52872dd4f" Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.578184 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf" Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.864818 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc"] Jan 27 08:30:03 crc kubenswrapper[4799]: I0127 08:30:03.869734 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491665-4rrsc"] Jan 27 08:30:04 crc kubenswrapper[4799]: I0127 08:30:04.472125 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de07a2d4-e916-4c2d-bb3b-b8a268461a71" path="/var/lib/kubelet/pods/de07a2d4-e916-4c2d-bb3b-b8a268461a71/volumes" Jan 27 08:30:06 crc kubenswrapper[4799]: I0127 08:30:06.452446 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:30:06 crc kubenswrapper[4799]: E0127 08:30:06.452861 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:30:19 crc kubenswrapper[4799]: I0127 08:30:19.453032 4799 scope.go:117] "RemoveContainer" 
containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:30:19 crc kubenswrapper[4799]: E0127 08:30:19.454584 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:30:30 crc kubenswrapper[4799]: I0127 08:30:30.451097 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:30:30 crc kubenswrapper[4799]: E0127 08:30:30.452023 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.766630 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-49hhd"] Jan 27 08:30:32 crc kubenswrapper[4799]: E0127 08:30:32.767001 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8451b63-3252-4186-9a88-3d831e1a66fc" containerName="collect-profiles" Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.767015 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8451b63-3252-4186-9a88-3d831e1a66fc" containerName="collect-profiles" Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.767209 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8451b63-3252-4186-9a88-3d831e1a66fc" 
containerName="collect-profiles" Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.768663 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.774793 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49hhd"] Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.925946 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk8rp\" (UniqueName: \"kubernetes.io/projected/4f54db27-5f7f-4094-83d6-9d373515a147-kube-api-access-gk8rp\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.926310 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-catalog-content\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:32 crc kubenswrapper[4799]: I0127 08:30:32.926387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-utilities\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.027573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk8rp\" (UniqueName: \"kubernetes.io/projected/4f54db27-5f7f-4094-83d6-9d373515a147-kube-api-access-gk8rp\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") 
" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.027622 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-catalog-content\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.027703 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-utilities\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.028119 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-catalog-content\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.028133 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-utilities\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.056119 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk8rp\" (UniqueName: \"kubernetes.io/projected/4f54db27-5f7f-4094-83d6-9d373515a147-kube-api-access-gk8rp\") pod \"community-operators-49hhd\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " 
pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.106282 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.596394 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49hhd"] Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.807250 4799 generic.go:334] "Generic (PLEG): container finished" podID="4f54db27-5f7f-4094-83d6-9d373515a147" containerID="d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541" exitCode=0 Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.807394 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49hhd" event={"ID":"4f54db27-5f7f-4094-83d6-9d373515a147","Type":"ContainerDied","Data":"d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541"} Jan 27 08:30:33 crc kubenswrapper[4799]: I0127 08:30:33.807590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49hhd" event={"ID":"4f54db27-5f7f-4094-83d6-9d373515a147","Type":"ContainerStarted","Data":"da6207c75be236c30c759eba9315363ad29cd7dc9bf4128ab004af25bd00d445"} Jan 27 08:30:34 crc kubenswrapper[4799]: I0127 08:30:34.827140 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49hhd" event={"ID":"4f54db27-5f7f-4094-83d6-9d373515a147","Type":"ContainerStarted","Data":"100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c"} Jan 27 08:30:35 crc kubenswrapper[4799]: I0127 08:30:35.835858 4799 generic.go:334] "Generic (PLEG): container finished" podID="4f54db27-5f7f-4094-83d6-9d373515a147" containerID="100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c" exitCode=0 Jan 27 08:30:35 crc kubenswrapper[4799]: I0127 08:30:35.835920 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-49hhd" event={"ID":"4f54db27-5f7f-4094-83d6-9d373515a147","Type":"ContainerDied","Data":"100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c"} Jan 27 08:30:36 crc kubenswrapper[4799]: I0127 08:30:36.849407 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49hhd" event={"ID":"4f54db27-5f7f-4094-83d6-9d373515a147","Type":"ContainerStarted","Data":"7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf"} Jan 27 08:30:36 crc kubenswrapper[4799]: I0127 08:30:36.875234 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-49hhd" podStartSLOduration=2.341445938 podStartE2EDuration="4.875213627s" podCreationTimestamp="2026-01-27 08:30:32 +0000 UTC" firstStartedPulling="2026-01-27 08:30:33.809368886 +0000 UTC m=+2700.120472951" lastFinishedPulling="2026-01-27 08:30:36.343136535 +0000 UTC m=+2702.654240640" observedRunningTime="2026-01-27 08:30:36.872135783 +0000 UTC m=+2703.183239868" watchObservedRunningTime="2026-01-27 08:30:36.875213627 +0000 UTC m=+2703.186317702" Jan 27 08:30:38 crc kubenswrapper[4799]: I0127 08:30:38.741699 4799 scope.go:117] "RemoveContainer" containerID="2b9bdf16be7602c152391c3f4392da1ce116663c6453dcd9991c2f2de697ea9a" Jan 27 08:30:43 crc kubenswrapper[4799]: I0127 08:30:43.107842 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:43 crc kubenswrapper[4799]: I0127 08:30:43.108618 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:43 crc kubenswrapper[4799]: I0127 08:30:43.194522 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:43 crc kubenswrapper[4799]: I0127 08:30:43.972576 
4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:44 crc kubenswrapper[4799]: I0127 08:30:44.033685 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49hhd"] Jan 27 08:30:44 crc kubenswrapper[4799]: I0127 08:30:44.460132 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:30:44 crc kubenswrapper[4799]: E0127 08:30:44.460854 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:30:45 crc kubenswrapper[4799]: I0127 08:30:45.928392 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-49hhd" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="registry-server" containerID="cri-o://7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf" gracePeriod=2 Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.362293 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.542825 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-utilities\") pod \"4f54db27-5f7f-4094-83d6-9d373515a147\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.542891 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk8rp\" (UniqueName: \"kubernetes.io/projected/4f54db27-5f7f-4094-83d6-9d373515a147-kube-api-access-gk8rp\") pod \"4f54db27-5f7f-4094-83d6-9d373515a147\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.542913 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-catalog-content\") pod \"4f54db27-5f7f-4094-83d6-9d373515a147\" (UID: \"4f54db27-5f7f-4094-83d6-9d373515a147\") " Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.544768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-utilities" (OuterVolumeSpecName: "utilities") pod "4f54db27-5f7f-4094-83d6-9d373515a147" (UID: "4f54db27-5f7f-4094-83d6-9d373515a147"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.549153 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f54db27-5f7f-4094-83d6-9d373515a147-kube-api-access-gk8rp" (OuterVolumeSpecName: "kube-api-access-gk8rp") pod "4f54db27-5f7f-4094-83d6-9d373515a147" (UID: "4f54db27-5f7f-4094-83d6-9d373515a147"). InnerVolumeSpecName "kube-api-access-gk8rp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.598270 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f54db27-5f7f-4094-83d6-9d373515a147" (UID: "4f54db27-5f7f-4094-83d6-9d373515a147"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.644625 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk8rp\" (UniqueName: \"kubernetes.io/projected/4f54db27-5f7f-4094-83d6-9d373515a147-kube-api-access-gk8rp\") on node \"crc\" DevicePath \"\"" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.644966 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.645115 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f54db27-5f7f-4094-83d6-9d373515a147-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.938592 4799 generic.go:334] "Generic (PLEG): container finished" podID="4f54db27-5f7f-4094-83d6-9d373515a147" containerID="7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf" exitCode=0 Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.938656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49hhd" event={"ID":"4f54db27-5f7f-4094-83d6-9d373515a147","Type":"ContainerDied","Data":"7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf"} Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.938667 4799 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-49hhd" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.938699 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49hhd" event={"ID":"4f54db27-5f7f-4094-83d6-9d373515a147","Type":"ContainerDied","Data":"da6207c75be236c30c759eba9315363ad29cd7dc9bf4128ab004af25bd00d445"} Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.938729 4799 scope.go:117] "RemoveContainer" containerID="7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.973293 4799 scope.go:117] "RemoveContainer" containerID="100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c" Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.985691 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49hhd"] Jan 27 08:30:46 crc kubenswrapper[4799]: I0127 08:30:46.994201 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-49hhd"] Jan 27 08:30:47 crc kubenswrapper[4799]: I0127 08:30:47.009536 4799 scope.go:117] "RemoveContainer" containerID="d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541" Jan 27 08:30:47 crc kubenswrapper[4799]: I0127 08:30:47.028041 4799 scope.go:117] "RemoveContainer" containerID="7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf" Jan 27 08:30:47 crc kubenswrapper[4799]: E0127 08:30:47.029578 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf\": container with ID starting with 7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf not found: ID does not exist" containerID="7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf" Jan 27 08:30:47 crc kubenswrapper[4799]: I0127 08:30:47.029613 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf"} err="failed to get container status \"7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf\": rpc error: code = NotFound desc = could not find container \"7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf\": container with ID starting with 7912147ba2da26d1dbf3da84ee017843987b57c4f710c8c263644997763cf7cf not found: ID does not exist" Jan 27 08:30:47 crc kubenswrapper[4799]: I0127 08:30:47.029633 4799 scope.go:117] "RemoveContainer" containerID="100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c" Jan 27 08:30:47 crc kubenswrapper[4799]: E0127 08:30:47.030332 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c\": container with ID starting with 100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c not found: ID does not exist" containerID="100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c" Jan 27 08:30:47 crc kubenswrapper[4799]: I0127 08:30:47.030388 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c"} err="failed to get container status \"100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c\": rpc error: code = NotFound desc = could not find container \"100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c\": container with ID starting with 100b83576414422cb8260b72008e2fa189cd7c4966f8c854744c7ea970b3203c not found: ID does not exist" Jan 27 08:30:47 crc kubenswrapper[4799]: I0127 08:30:47.030427 4799 scope.go:117] "RemoveContainer" containerID="d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541" Jan 27 08:30:47 crc kubenswrapper[4799]: E0127 
08:30:47.030842 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541\": container with ID starting with d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541 not found: ID does not exist" containerID="d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541" Jan 27 08:30:47 crc kubenswrapper[4799]: I0127 08:30:47.030872 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541"} err="failed to get container status \"d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541\": rpc error: code = NotFound desc = could not find container \"d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541\": container with ID starting with d6b86126a00abc79d533545dff5ec3a393c52810f93861b4e9a87b9a07623541 not found: ID does not exist" Jan 27 08:30:48 crc kubenswrapper[4799]: I0127 08:30:48.469181 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" path="/var/lib/kubelet/pods/4f54db27-5f7f-4094-83d6-9d373515a147/volumes" Jan 27 08:30:59 crc kubenswrapper[4799]: I0127 08:30:59.451561 4799 scope.go:117] "RemoveContainer" containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:31:00 crc kubenswrapper[4799]: I0127 08:31:00.034817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"c92fa07fd02fdf0916080dd3758fc3bc5f616979c45295085ad23cc3c26e6460"} Jan 27 08:33:23 crc kubenswrapper[4799]: I0127 08:33:23.731510 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:33:23 crc kubenswrapper[4799]: I0127 08:33:23.732153 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:33:53 crc kubenswrapper[4799]: I0127 08:33:53.731114 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:33:53 crc kubenswrapper[4799]: I0127 08:33:53.731936 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.518268 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-srhld"] Jan 27 08:33:59 crc kubenswrapper[4799]: E0127 08:33:59.519161 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="extract-content" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.519173 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="extract-content" Jan 27 08:33:59 crc kubenswrapper[4799]: E0127 08:33:59.519192 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="extract-utilities" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.519198 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="extract-utilities" Jan 27 08:33:59 crc kubenswrapper[4799]: E0127 08:33:59.519223 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="registry-server" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.519230 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="registry-server" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.519369 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f54db27-5f7f-4094-83d6-9d373515a147" containerName="registry-server" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.520663 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.532060 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srhld"] Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.629737 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-utilities\") pod \"certified-operators-srhld\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.629822 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7klnh\" (UniqueName: \"kubernetes.io/projected/58b8a20d-d7f1-4ed0-b589-61cd210913a7-kube-api-access-7klnh\") pod \"certified-operators-srhld\" (UID: 
\"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.629909 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-catalog-content\") pod \"certified-operators-srhld\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.731039 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7klnh\" (UniqueName: \"kubernetes.io/projected/58b8a20d-d7f1-4ed0-b589-61cd210913a7-kube-api-access-7klnh\") pod \"certified-operators-srhld\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.731125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-catalog-content\") pod \"certified-operators-srhld\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.731174 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-utilities\") pod \"certified-operators-srhld\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.731603 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-utilities\") pod \"certified-operators-srhld\" (UID: 
\"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.731825 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-catalog-content\") pod \"certified-operators-srhld\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.748891 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7klnh\" (UniqueName: \"kubernetes.io/projected/58b8a20d-d7f1-4ed0-b589-61cd210913a7-kube-api-access-7klnh\") pod \"certified-operators-srhld\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:33:59 crc kubenswrapper[4799]: I0127 08:33:59.842623 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:34:00 crc kubenswrapper[4799]: I0127 08:34:00.338584 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srhld"] Jan 27 08:34:00 crc kubenswrapper[4799]: I0127 08:34:00.491439 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srhld" event={"ID":"58b8a20d-d7f1-4ed0-b589-61cd210913a7","Type":"ContainerStarted","Data":"2a13acdcd6f186c6fa2840dd5f8a812112fea556c04dbc3eebad6a16014bc8af"} Jan 27 08:34:01 crc kubenswrapper[4799]: I0127 08:34:01.502245 4799 generic.go:334] "Generic (PLEG): container finished" podID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerID="e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71" exitCode=0 Jan 27 08:34:01 crc kubenswrapper[4799]: I0127 08:34:01.502326 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srhld" event={"ID":"58b8a20d-d7f1-4ed0-b589-61cd210913a7","Type":"ContainerDied","Data":"e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71"} Jan 27 08:34:01 crc kubenswrapper[4799]: I0127 08:34:01.504489 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:34:02 crc kubenswrapper[4799]: I0127 08:34:02.509436 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srhld" event={"ID":"58b8a20d-d7f1-4ed0-b589-61cd210913a7","Type":"ContainerStarted","Data":"b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0"} Jan 27 08:34:03 crc kubenswrapper[4799]: I0127 08:34:03.519504 4799 generic.go:334] "Generic (PLEG): container finished" podID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerID="b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0" exitCode=0 Jan 27 08:34:03 crc kubenswrapper[4799]: I0127 08:34:03.519602 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-srhld" event={"ID":"58b8a20d-d7f1-4ed0-b589-61cd210913a7","Type":"ContainerDied","Data":"b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0"} Jan 27 08:34:04 crc kubenswrapper[4799]: I0127 08:34:04.533121 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srhld" event={"ID":"58b8a20d-d7f1-4ed0-b589-61cd210913a7","Type":"ContainerStarted","Data":"e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763"} Jan 27 08:34:04 crc kubenswrapper[4799]: I0127 08:34:04.551241 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-srhld" podStartSLOduration=3.109181235 podStartE2EDuration="5.551219921s" podCreationTimestamp="2026-01-27 08:33:59 +0000 UTC" firstStartedPulling="2026-01-27 08:34:01.504067888 +0000 UTC m=+2907.815171963" lastFinishedPulling="2026-01-27 08:34:03.946106584 +0000 UTC m=+2910.257210649" observedRunningTime="2026-01-27 08:34:04.548333053 +0000 UTC m=+2910.859437118" watchObservedRunningTime="2026-01-27 08:34:04.551219921 +0000 UTC m=+2910.862323986" Jan 27 08:34:09 crc kubenswrapper[4799]: I0127 08:34:09.843808 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:34:09 crc kubenswrapper[4799]: I0127 08:34:09.844172 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:34:09 crc kubenswrapper[4799]: I0127 08:34:09.923109 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:34:10 crc kubenswrapper[4799]: I0127 08:34:10.637061 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:34:10 crc kubenswrapper[4799]: I0127 
08:34:10.692019 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srhld"] Jan 27 08:34:12 crc kubenswrapper[4799]: I0127 08:34:12.598348 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-srhld" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="registry-server" containerID="cri-o://e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763" gracePeriod=2 Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.147928 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.268783 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-utilities\") pod \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.268910 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7klnh\" (UniqueName: \"kubernetes.io/projected/58b8a20d-d7f1-4ed0-b589-61cd210913a7-kube-api-access-7klnh\") pod \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.268936 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-catalog-content\") pod \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\" (UID: \"58b8a20d-d7f1-4ed0-b589-61cd210913a7\") " Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.270109 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-utilities" (OuterVolumeSpecName: 
"utilities") pod "58b8a20d-d7f1-4ed0-b589-61cd210913a7" (UID: "58b8a20d-d7f1-4ed0-b589-61cd210913a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.281774 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b8a20d-d7f1-4ed0-b589-61cd210913a7-kube-api-access-7klnh" (OuterVolumeSpecName: "kube-api-access-7klnh") pod "58b8a20d-d7f1-4ed0-b589-61cd210913a7" (UID: "58b8a20d-d7f1-4ed0-b589-61cd210913a7"). InnerVolumeSpecName "kube-api-access-7klnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.369827 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.369855 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7klnh\" (UniqueName: \"kubernetes.io/projected/58b8a20d-d7f1-4ed0-b589-61cd210913a7-kube-api-access-7klnh\") on node \"crc\" DevicePath \"\"" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.601857 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58b8a20d-d7f1-4ed0-b589-61cd210913a7" (UID: "58b8a20d-d7f1-4ed0-b589-61cd210913a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.612238 4799 generic.go:334] "Generic (PLEG): container finished" podID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerID="e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763" exitCode=0 Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.612287 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srhld" event={"ID":"58b8a20d-d7f1-4ed0-b589-61cd210913a7","Type":"ContainerDied","Data":"e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763"} Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.612344 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srhld" event={"ID":"58b8a20d-d7f1-4ed0-b589-61cd210913a7","Type":"ContainerDied","Data":"2a13acdcd6f186c6fa2840dd5f8a812112fea556c04dbc3eebad6a16014bc8af"} Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.612367 4799 scope.go:117] "RemoveContainer" containerID="e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.612538 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srhld" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.649492 4799 scope.go:117] "RemoveContainer" containerID="b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.664092 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srhld"] Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.671280 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-srhld"] Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.674929 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b8a20d-d7f1-4ed0-b589-61cd210913a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.686639 4799 scope.go:117] "RemoveContainer" containerID="e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.703915 4799 scope.go:117] "RemoveContainer" containerID="e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763" Jan 27 08:34:13 crc kubenswrapper[4799]: E0127 08:34:13.704636 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763\": container with ID starting with e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763 not found: ID does not exist" containerID="e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.704676 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763"} err="failed to get container status 
\"e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763\": rpc error: code = NotFound desc = could not find container \"e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763\": container with ID starting with e128762062acfbf431fddb6bf3e235afc475720ebe7508bc085c710d1d5fa763 not found: ID does not exist" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.704701 4799 scope.go:117] "RemoveContainer" containerID="b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0" Jan 27 08:34:13 crc kubenswrapper[4799]: E0127 08:34:13.705040 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0\": container with ID starting with b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0 not found: ID does not exist" containerID="b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.705074 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0"} err="failed to get container status \"b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0\": rpc error: code = NotFound desc = could not find container \"b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0\": container with ID starting with b699b5f523d69bb82fcaf379057141d5be4901eebcdb1ec09bcab185c88647c0 not found: ID does not exist" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.705094 4799 scope.go:117] "RemoveContainer" containerID="e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71" Jan 27 08:34:13 crc kubenswrapper[4799]: E0127 08:34:13.705706 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71\": container with ID starting with e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71 not found: ID does not exist" containerID="e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71" Jan 27 08:34:13 crc kubenswrapper[4799]: I0127 08:34:13.705756 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71"} err="failed to get container status \"e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71\": rpc error: code = NotFound desc = could not find container \"e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71\": container with ID starting with e5abdb2d26e35cb8f3d90429d0670cbcb386d184b759b74440cd27eb8b686e71 not found: ID does not exist" Jan 27 08:34:14 crc kubenswrapper[4799]: I0127 08:34:14.466937 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" path="/var/lib/kubelet/pods/58b8a20d-d7f1-4ed0-b589-61cd210913a7/volumes" Jan 27 08:34:23 crc kubenswrapper[4799]: I0127 08:34:23.731049 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:34:23 crc kubenswrapper[4799]: I0127 08:34:23.731676 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:34:23 crc kubenswrapper[4799]: I0127 08:34:23.731723 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:34:23 crc kubenswrapper[4799]: I0127 08:34:23.732544 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c92fa07fd02fdf0916080dd3758fc3bc5f616979c45295085ad23cc3c26e6460"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:34:23 crc kubenswrapper[4799]: I0127 08:34:23.732620 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://c92fa07fd02fdf0916080dd3758fc3bc5f616979c45295085ad23cc3c26e6460" gracePeriod=600 Jan 27 08:34:24 crc kubenswrapper[4799]: I0127 08:34:24.714488 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="c92fa07fd02fdf0916080dd3758fc3bc5f616979c45295085ad23cc3c26e6460" exitCode=0 Jan 27 08:34:24 crc kubenswrapper[4799]: I0127 08:34:24.714552 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"c92fa07fd02fdf0916080dd3758fc3bc5f616979c45295085ad23cc3c26e6460"} Jan 27 08:34:24 crc kubenswrapper[4799]: I0127 08:34:24.715229 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41"} Jan 27 08:34:24 crc kubenswrapper[4799]: I0127 08:34:24.715258 4799 scope.go:117] "RemoveContainer" 
containerID="e3cbfcf86ac8433fd2878ed5d78ccb34fe10d6b95823778028594ed14ed9dafd" Jan 27 08:36:53 crc kubenswrapper[4799]: I0127 08:36:53.731053 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:36:53 crc kubenswrapper[4799]: I0127 08:36:53.731843 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:37:23 crc kubenswrapper[4799]: I0127 08:37:23.731659 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:37:23 crc kubenswrapper[4799]: I0127 08:37:23.732568 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:37:53 crc kubenswrapper[4799]: I0127 08:37:53.730942 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:37:53 crc kubenswrapper[4799]: I0127 08:37:53.731643 4799 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:37:53 crc kubenswrapper[4799]: I0127 08:37:53.731712 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:37:53 crc kubenswrapper[4799]: I0127 08:37:53.732607 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:37:53 crc kubenswrapper[4799]: I0127 08:37:53.732709 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" gracePeriod=600 Jan 27 08:37:53 crc kubenswrapper[4799]: E0127 08:37:53.876386 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:37:54 crc kubenswrapper[4799]: I0127 08:37:54.449162 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" exitCode=0 Jan 27 08:37:54 crc kubenswrapper[4799]: I0127 08:37:54.449213 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41"} Jan 27 08:37:54 crc kubenswrapper[4799]: I0127 08:37:54.449285 4799 scope.go:117] "RemoveContainer" containerID="c92fa07fd02fdf0916080dd3758fc3bc5f616979c45295085ad23cc3c26e6460" Jan 27 08:37:54 crc kubenswrapper[4799]: I0127 08:37:54.449946 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:37:54 crc kubenswrapper[4799]: E0127 08:37:54.450250 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:38:06 crc kubenswrapper[4799]: I0127 08:38:06.451855 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:38:06 crc kubenswrapper[4799]: E0127 08:38:06.452851 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 
08:38:21 crc kubenswrapper[4799]: I0127 08:38:21.451208 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:38:21 crc kubenswrapper[4799]: E0127 08:38:21.452359 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:38:32 crc kubenswrapper[4799]: I0127 08:38:32.453291 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:38:32 crc kubenswrapper[4799]: E0127 08:38:32.454385 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:38:45 crc kubenswrapper[4799]: I0127 08:38:45.451706 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:38:45 crc kubenswrapper[4799]: E0127 08:38:45.453267 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:38:58 crc kubenswrapper[4799]: I0127 08:38:58.452052 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:38:58 crc kubenswrapper[4799]: E0127 08:38:58.454043 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:39:09 crc kubenswrapper[4799]: I0127 08:39:09.454793 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:39:09 crc kubenswrapper[4799]: E0127 08:39:09.455795 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:39:24 crc kubenswrapper[4799]: I0127 08:39:24.459552 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:39:24 crc kubenswrapper[4799]: E0127 08:39:24.460255 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:39:36 crc kubenswrapper[4799]: I0127 08:39:36.451719 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:39:36 crc kubenswrapper[4799]: E0127 08:39:36.452683 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.262983 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrv7w"] Jan 27 08:39:42 crc kubenswrapper[4799]: E0127 08:39:42.264551 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="extract-utilities" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.264568 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="extract-utilities" Jan 27 08:39:42 crc kubenswrapper[4799]: E0127 08:39:42.264591 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="registry-server" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.264600 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="registry-server" Jan 27 08:39:42 crc kubenswrapper[4799]: E0127 08:39:42.264620 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="extract-content" Jan 27 08:39:42 crc kubenswrapper[4799]: 
I0127 08:39:42.264628 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="extract-content" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.264799 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b8a20d-d7f1-4ed0-b589-61cd210913a7" containerName="registry-server" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.266073 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.276338 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrv7w"] Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.443653 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzqvx\" (UniqueName: \"kubernetes.io/projected/450b659a-612b-4c75-b500-3e1eb64c3167-kube-api-access-zzqvx\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.443717 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-utilities\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.443795 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-catalog-content\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc 
kubenswrapper[4799]: I0127 08:39:42.544748 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzqvx\" (UniqueName: \"kubernetes.io/projected/450b659a-612b-4c75-b500-3e1eb64c3167-kube-api-access-zzqvx\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.544803 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-utilities\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.544912 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-catalog-content\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.545747 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-catalog-content\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.545810 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-utilities\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.564559 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzqvx\" (UniqueName: \"kubernetes.io/projected/450b659a-612b-4c75-b500-3e1eb64c3167-kube-api-access-zzqvx\") pod \"redhat-marketplace-jrv7w\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:42 crc kubenswrapper[4799]: I0127 08:39:42.591161 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:43 crc kubenswrapper[4799]: I0127 08:39:43.012774 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrv7w"] Jan 27 08:39:43 crc kubenswrapper[4799]: I0127 08:39:43.320641 4799 generic.go:334] "Generic (PLEG): container finished" podID="450b659a-612b-4c75-b500-3e1eb64c3167" containerID="48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95" exitCode=0 Jan 27 08:39:43 crc kubenswrapper[4799]: I0127 08:39:43.320811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrv7w" event={"ID":"450b659a-612b-4c75-b500-3e1eb64c3167","Type":"ContainerDied","Data":"48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95"} Jan 27 08:39:43 crc kubenswrapper[4799]: I0127 08:39:43.320985 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrv7w" event={"ID":"450b659a-612b-4c75-b500-3e1eb64c3167","Type":"ContainerStarted","Data":"e37c83377747ff8236529f3e19900e00443379ad3799e959427d00227ce0a63d"} Jan 27 08:39:43 crc kubenswrapper[4799]: I0127 08:39:43.324199 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:39:45 crc kubenswrapper[4799]: I0127 08:39:45.339613 4799 generic.go:334] "Generic (PLEG): container finished" podID="450b659a-612b-4c75-b500-3e1eb64c3167" 
containerID="2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324" exitCode=0 Jan 27 08:39:45 crc kubenswrapper[4799]: I0127 08:39:45.339725 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrv7w" event={"ID":"450b659a-612b-4c75-b500-3e1eb64c3167","Type":"ContainerDied","Data":"2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324"} Jan 27 08:39:46 crc kubenswrapper[4799]: I0127 08:39:46.347615 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrv7w" event={"ID":"450b659a-612b-4c75-b500-3e1eb64c3167","Type":"ContainerStarted","Data":"3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38"} Jan 27 08:39:46 crc kubenswrapper[4799]: I0127 08:39:46.366219 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrv7w" podStartSLOduration=1.958352412 podStartE2EDuration="4.366192087s" podCreationTimestamp="2026-01-27 08:39:42 +0000 UTC" firstStartedPulling="2026-01-27 08:39:43.323968988 +0000 UTC m=+3249.635073053" lastFinishedPulling="2026-01-27 08:39:45.731808663 +0000 UTC m=+3252.042912728" observedRunningTime="2026-01-27 08:39:46.362026394 +0000 UTC m=+3252.673130499" watchObservedRunningTime="2026-01-27 08:39:46.366192087 +0000 UTC m=+3252.677296192" Jan 27 08:39:51 crc kubenswrapper[4799]: I0127 08:39:51.451757 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:39:51 crc kubenswrapper[4799]: E0127 08:39:51.452345 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:39:52 crc kubenswrapper[4799]: I0127 08:39:52.592448 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:52 crc kubenswrapper[4799]: I0127 08:39:52.592632 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:52 crc kubenswrapper[4799]: I0127 08:39:52.668757 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:53 crc kubenswrapper[4799]: I0127 08:39:53.485513 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:53 crc kubenswrapper[4799]: I0127 08:39:53.546178 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrv7w"] Jan 27 08:39:55 crc kubenswrapper[4799]: I0127 08:39:55.427769 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrv7w" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="registry-server" containerID="cri-o://3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38" gracePeriod=2 Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.322830 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.435190 4799 generic.go:334] "Generic (PLEG): container finished" podID="450b659a-612b-4c75-b500-3e1eb64c3167" containerID="3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38" exitCode=0 Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.435244 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrv7w" event={"ID":"450b659a-612b-4c75-b500-3e1eb64c3167","Type":"ContainerDied","Data":"3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38"} Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.435275 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrv7w" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.435337 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrv7w" event={"ID":"450b659a-612b-4c75-b500-3e1eb64c3167","Type":"ContainerDied","Data":"e37c83377747ff8236529f3e19900e00443379ad3799e959427d00227ce0a63d"} Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.435369 4799 scope.go:117] "RemoveContainer" containerID="3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.453059 4799 scope.go:117] "RemoveContainer" containerID="2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.464786 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-utilities\") pod \"450b659a-612b-4c75-b500-3e1eb64c3167\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.464845 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zzqvx\" (UniqueName: \"kubernetes.io/projected/450b659a-612b-4c75-b500-3e1eb64c3167-kube-api-access-zzqvx\") pod \"450b659a-612b-4c75-b500-3e1eb64c3167\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.464878 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-catalog-content\") pod \"450b659a-612b-4c75-b500-3e1eb64c3167\" (UID: \"450b659a-612b-4c75-b500-3e1eb64c3167\") " Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.466370 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-utilities" (OuterVolumeSpecName: "utilities") pod "450b659a-612b-4c75-b500-3e1eb64c3167" (UID: "450b659a-612b-4c75-b500-3e1eb64c3167"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.471527 4799 scope.go:117] "RemoveContainer" containerID="48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.475453 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450b659a-612b-4c75-b500-3e1eb64c3167-kube-api-access-zzqvx" (OuterVolumeSpecName: "kube-api-access-zzqvx") pod "450b659a-612b-4c75-b500-3e1eb64c3167" (UID: "450b659a-612b-4c75-b500-3e1eb64c3167"). InnerVolumeSpecName "kube-api-access-zzqvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.487127 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "450b659a-612b-4c75-b500-3e1eb64c3167" (UID: "450b659a-612b-4c75-b500-3e1eb64c3167"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.512439 4799 scope.go:117] "RemoveContainer" containerID="3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38" Jan 27 08:39:56 crc kubenswrapper[4799]: E0127 08:39:56.512889 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38\": container with ID starting with 3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38 not found: ID does not exist" containerID="3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.512923 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38"} err="failed to get container status \"3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38\": rpc error: code = NotFound desc = could not find container \"3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38\": container with ID starting with 3a7d3f582e1f4826b293a18427f77e930c2bcbfa18a3282cf46edc0a8e0d2d38 not found: ID does not exist" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.513006 4799 scope.go:117] "RemoveContainer" containerID="2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324" Jan 27 08:39:56 crc kubenswrapper[4799]: E0127 08:39:56.513434 4799 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324\": container with ID starting with 2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324 not found: ID does not exist" containerID="2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.513456 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324"} err="failed to get container status \"2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324\": rpc error: code = NotFound desc = could not find container \"2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324\": container with ID starting with 2fc4f537d5f767b2688747d7eb08d0bbafb42f2578663f5dedba64ec4a6c3324 not found: ID does not exist" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.513475 4799 scope.go:117] "RemoveContainer" containerID="48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95" Jan 27 08:39:56 crc kubenswrapper[4799]: E0127 08:39:56.513843 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95\": container with ID starting with 48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95 not found: ID does not exist" containerID="48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.513895 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95"} err="failed to get container status \"48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95\": rpc error: code = NotFound desc = could 
not find container \"48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95\": container with ID starting with 48289823d68e91f4b2b8fb82154d5d88de69d4cbbe412b0682bbfc3448605c95 not found: ID does not exist" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.566045 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.566075 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzqvx\" (UniqueName: \"kubernetes.io/projected/450b659a-612b-4c75-b500-3e1eb64c3167-kube-api-access-zzqvx\") on node \"crc\" DevicePath \"\"" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.566087 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450b659a-612b-4c75-b500-3e1eb64c3167-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.765468 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrv7w"] Jan 27 08:39:56 crc kubenswrapper[4799]: I0127 08:39:56.773066 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrv7w"] Jan 27 08:39:58 crc kubenswrapper[4799]: I0127 08:39:58.464980 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" path="/var/lib/kubelet/pods/450b659a-612b-4c75-b500-3e1eb64c3167/volumes" Jan 27 08:40:05 crc kubenswrapper[4799]: I0127 08:40:05.452355 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:40:05 crc kubenswrapper[4799]: E0127 08:40:05.453374 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:40:20 crc kubenswrapper[4799]: I0127 08:40:20.452729 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:40:20 crc kubenswrapper[4799]: E0127 08:40:20.453655 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:40:35 crc kubenswrapper[4799]: I0127 08:40:35.451176 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:40:35 crc kubenswrapper[4799]: E0127 08:40:35.452138 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:40:48 crc kubenswrapper[4799]: I0127 08:40:48.451720 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:40:48 crc kubenswrapper[4799]: E0127 08:40:48.452607 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:41:01 crc kubenswrapper[4799]: I0127 08:41:01.451290 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:41:01 crc kubenswrapper[4799]: E0127 08:41:01.452179 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.747924 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hfckx"] Jan 27 08:41:10 crc kubenswrapper[4799]: E0127 08:41:10.749614 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="extract-utilities" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.749632 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="extract-utilities" Jan 27 08:41:10 crc kubenswrapper[4799]: E0127 08:41:10.749646 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="registry-server" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.749654 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="registry-server" Jan 27 08:41:10 crc 
kubenswrapper[4799]: E0127 08:41:10.749671 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="extract-content" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.749680 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="extract-content" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.749839 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="450b659a-612b-4c75-b500-3e1eb64c3167" containerName="registry-server" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.751028 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.768243 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfckx"] Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.935248 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-utilities\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.935368 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnmn\" (UniqueName: \"kubernetes.io/projected/267a1e23-b0ef-4963-ab20-7cc43ecade06-kube-api-access-wjnmn\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:10 crc kubenswrapper[4799]: I0127 08:41:10.935904 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-catalog-content\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.036710 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnmn\" (UniqueName: \"kubernetes.io/projected/267a1e23-b0ef-4963-ab20-7cc43ecade06-kube-api-access-wjnmn\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.036812 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-catalog-content\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.036915 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-utilities\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.037228 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-catalog-content\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.037397 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-utilities\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.058328 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnmn\" (UniqueName: \"kubernetes.io/projected/267a1e23-b0ef-4963-ab20-7cc43ecade06-kube-api-access-wjnmn\") pod \"community-operators-hfckx\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.077211 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:11 crc kubenswrapper[4799]: I0127 08:41:11.562504 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfckx"] Jan 27 08:41:12 crc kubenswrapper[4799]: I0127 08:41:12.306130 4799 generic.go:334] "Generic (PLEG): container finished" podID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerID="10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486" exitCode=0 Jan 27 08:41:12 crc kubenswrapper[4799]: I0127 08:41:12.306260 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfckx" event={"ID":"267a1e23-b0ef-4963-ab20-7cc43ecade06","Type":"ContainerDied","Data":"10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486"} Jan 27 08:41:12 crc kubenswrapper[4799]: I0127 08:41:12.306742 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfckx" event={"ID":"267a1e23-b0ef-4963-ab20-7cc43ecade06","Type":"ContainerStarted","Data":"136a36d8730ed2c242ed829fc551e5da35819a67bae3d30402c219d7104b58c1"} Jan 27 08:41:12 crc kubenswrapper[4799]: I0127 08:41:12.451274 4799 scope.go:117] "RemoveContainer" 
containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:41:12 crc kubenswrapper[4799]: E0127 08:41:12.451565 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:41:13 crc kubenswrapper[4799]: I0127 08:41:13.316576 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfckx" event={"ID":"267a1e23-b0ef-4963-ab20-7cc43ecade06","Type":"ContainerStarted","Data":"0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2"} Jan 27 08:41:14 crc kubenswrapper[4799]: I0127 08:41:14.324256 4799 generic.go:334] "Generic (PLEG): container finished" podID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerID="0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2" exitCode=0 Jan 27 08:41:14 crc kubenswrapper[4799]: I0127 08:41:14.324335 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfckx" event={"ID":"267a1e23-b0ef-4963-ab20-7cc43ecade06","Type":"ContainerDied","Data":"0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2"} Jan 27 08:41:17 crc kubenswrapper[4799]: I0127 08:41:17.346140 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfckx" event={"ID":"267a1e23-b0ef-4963-ab20-7cc43ecade06","Type":"ContainerStarted","Data":"a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3"} Jan 27 08:41:17 crc kubenswrapper[4799]: I0127 08:41:17.364624 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hfckx" 
podStartSLOduration=3.393822244 podStartE2EDuration="7.364591204s" podCreationTimestamp="2026-01-27 08:41:10 +0000 UTC" firstStartedPulling="2026-01-27 08:41:12.30827147 +0000 UTC m=+3338.619375545" lastFinishedPulling="2026-01-27 08:41:16.27904043 +0000 UTC m=+3342.590144505" observedRunningTime="2026-01-27 08:41:17.361867561 +0000 UTC m=+3343.672971626" watchObservedRunningTime="2026-01-27 08:41:17.364591204 +0000 UTC m=+3343.675695269" Jan 27 08:41:21 crc kubenswrapper[4799]: I0127 08:41:21.077742 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:21 crc kubenswrapper[4799]: I0127 08:41:21.078000 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:21 crc kubenswrapper[4799]: I0127 08:41:21.120642 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:21 crc kubenswrapper[4799]: I0127 08:41:21.413799 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:21 crc kubenswrapper[4799]: I0127 08:41:21.463711 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfckx"] Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.389813 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hfckx" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="registry-server" containerID="cri-o://a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3" gracePeriod=2 Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.830647 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.915501 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-utilities\") pod \"267a1e23-b0ef-4963-ab20-7cc43ecade06\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.916063 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjnmn\" (UniqueName: \"kubernetes.io/projected/267a1e23-b0ef-4963-ab20-7cc43ecade06-kube-api-access-wjnmn\") pod \"267a1e23-b0ef-4963-ab20-7cc43ecade06\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.916483 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-utilities" (OuterVolumeSpecName: "utilities") pod "267a1e23-b0ef-4963-ab20-7cc43ecade06" (UID: "267a1e23-b0ef-4963-ab20-7cc43ecade06"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.917027 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-catalog-content\") pod \"267a1e23-b0ef-4963-ab20-7cc43ecade06\" (UID: \"267a1e23-b0ef-4963-ab20-7cc43ecade06\") " Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.917421 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:41:23 crc kubenswrapper[4799]: I0127 08:41:23.926264 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267a1e23-b0ef-4963-ab20-7cc43ecade06-kube-api-access-wjnmn" (OuterVolumeSpecName: "kube-api-access-wjnmn") pod "267a1e23-b0ef-4963-ab20-7cc43ecade06" (UID: "267a1e23-b0ef-4963-ab20-7cc43ecade06"). InnerVolumeSpecName "kube-api-access-wjnmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.018339 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjnmn\" (UniqueName: \"kubernetes.io/projected/267a1e23-b0ef-4963-ab20-7cc43ecade06-kube-api-access-wjnmn\") on node \"crc\" DevicePath \"\"" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.309599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "267a1e23-b0ef-4963-ab20-7cc43ecade06" (UID: "267a1e23-b0ef-4963-ab20-7cc43ecade06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.322716 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267a1e23-b0ef-4963-ab20-7cc43ecade06-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.399405 4799 generic.go:334] "Generic (PLEG): container finished" podID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerID="a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3" exitCode=0 Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.399472 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfckx" event={"ID":"267a1e23-b0ef-4963-ab20-7cc43ecade06","Type":"ContainerDied","Data":"a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3"} Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.399520 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfckx" event={"ID":"267a1e23-b0ef-4963-ab20-7cc43ecade06","Type":"ContainerDied","Data":"136a36d8730ed2c242ed829fc551e5da35819a67bae3d30402c219d7104b58c1"} Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.399534 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfckx" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.399545 4799 scope.go:117] "RemoveContainer" containerID="a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.421201 4799 scope.go:117] "RemoveContainer" containerID="0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.439468 4799 scope.go:117] "RemoveContainer" containerID="10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.457359 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:41:24 crc kubenswrapper[4799]: E0127 08:41:24.457671 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.462713 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfckx"] Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.466290 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hfckx"] Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.469978 4799 scope.go:117] "RemoveContainer" containerID="a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3" Jan 27 08:41:24 crc kubenswrapper[4799]: E0127 08:41:24.470507 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3\": container with ID starting with a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3 not found: ID does not exist" containerID="a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.470539 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3"} err="failed to get container status \"a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3\": rpc error: code = NotFound desc = could not find container \"a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3\": container with ID starting with a851cc09ded22175f09a79a3f644ce367fe69adfe8504a635f712ab51b9565d3 not found: ID does not exist" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.470559 4799 scope.go:117] "RemoveContainer" containerID="0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2" Jan 27 08:41:24 crc kubenswrapper[4799]: E0127 08:41:24.471248 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2\": container with ID starting with 0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2 not found: ID does not exist" containerID="0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.471291 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2"} err="failed to get container status \"0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2\": rpc error: code = NotFound desc = could not find container \"0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2\": container 
with ID starting with 0d469ced58ad53588b35314db514c4d62954122c0a1d14954f992c4a8185fda2 not found: ID does not exist" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.471346 4799 scope.go:117] "RemoveContainer" containerID="10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486" Jan 27 08:41:24 crc kubenswrapper[4799]: E0127 08:41:24.472524 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486\": container with ID starting with 10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486 not found: ID does not exist" containerID="10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486" Jan 27 08:41:24 crc kubenswrapper[4799]: I0127 08:41:24.472551 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486"} err="failed to get container status \"10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486\": rpc error: code = NotFound desc = could not find container \"10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486\": container with ID starting with 10fd7162d470aaaa95efbe97b945b091cfebaeec789848b5e72f53c64905e486 not found: ID does not exist" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.461492 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" path="/var/lib/kubelet/pods/267a1e23-b0ef-4963-ab20-7cc43ecade06/volumes" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.773157 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kr5vm"] Jan 27 08:41:26 crc kubenswrapper[4799]: E0127 08:41:26.773796 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="registry-server" Jan 27 08:41:26 crc 
kubenswrapper[4799]: I0127 08:41:26.773847 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="registry-server" Jan 27 08:41:26 crc kubenswrapper[4799]: E0127 08:41:26.773897 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="extract-content" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.773918 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="extract-content" Jan 27 08:41:26 crc kubenswrapper[4799]: E0127 08:41:26.773984 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="extract-utilities" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.774004 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="extract-utilities" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.774557 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="267a1e23-b0ef-4963-ab20-7cc43ecade06" containerName="registry-server" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.777066 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.780649 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kr5vm"] Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.857025 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-utilities\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.857169 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-catalog-content\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.857232 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcb2\" (UniqueName: \"kubernetes.io/projected/80239c9c-4591-4f29-93be-59285c80abf6-kube-api-access-kxcb2\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.958386 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-catalog-content\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.958463 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-kxcb2\" (UniqueName: \"kubernetes.io/projected/80239c9c-4591-4f29-93be-59285c80abf6-kube-api-access-kxcb2\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.958506 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-utilities\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.958996 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-catalog-content\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.959015 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-utilities\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:26 crc kubenswrapper[4799]: I0127 08:41:26.982660 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcb2\" (UniqueName: \"kubernetes.io/projected/80239c9c-4591-4f29-93be-59285c80abf6-kube-api-access-kxcb2\") pod \"redhat-operators-kr5vm\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:27 crc kubenswrapper[4799]: I0127 08:41:27.099413 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:27 crc kubenswrapper[4799]: I0127 08:41:27.529910 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kr5vm"] Jan 27 08:41:28 crc kubenswrapper[4799]: I0127 08:41:28.428861 4799 generic.go:334] "Generic (PLEG): container finished" podID="80239c9c-4591-4f29-93be-59285c80abf6" containerID="fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3" exitCode=0 Jan 27 08:41:28 crc kubenswrapper[4799]: I0127 08:41:28.428965 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr5vm" event={"ID":"80239c9c-4591-4f29-93be-59285c80abf6","Type":"ContainerDied","Data":"fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3"} Jan 27 08:41:28 crc kubenswrapper[4799]: I0127 08:41:28.429390 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr5vm" event={"ID":"80239c9c-4591-4f29-93be-59285c80abf6","Type":"ContainerStarted","Data":"b18eee152060ce0d65f37d95a20cc796b6ff4f3e5e80b780107a0faeaabcf89e"} Jan 27 08:41:29 crc kubenswrapper[4799]: I0127 08:41:29.439665 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr5vm" event={"ID":"80239c9c-4591-4f29-93be-59285c80abf6","Type":"ContainerStarted","Data":"1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9"} Jan 27 08:41:30 crc kubenswrapper[4799]: I0127 08:41:30.450200 4799 generic.go:334] "Generic (PLEG): container finished" podID="80239c9c-4591-4f29-93be-59285c80abf6" containerID="1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9" exitCode=0 Jan 27 08:41:30 crc kubenswrapper[4799]: I0127 08:41:30.450590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr5vm" 
event={"ID":"80239c9c-4591-4f29-93be-59285c80abf6","Type":"ContainerDied","Data":"1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9"} Jan 27 08:41:31 crc kubenswrapper[4799]: I0127 08:41:31.460932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr5vm" event={"ID":"80239c9c-4591-4f29-93be-59285c80abf6","Type":"ContainerStarted","Data":"f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2"} Jan 27 08:41:31 crc kubenswrapper[4799]: I0127 08:41:31.489383 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kr5vm" podStartSLOduration=3.021425239 podStartE2EDuration="5.489364819s" podCreationTimestamp="2026-01-27 08:41:26 +0000 UTC" firstStartedPulling="2026-01-27 08:41:28.431026863 +0000 UTC m=+3354.742130928" lastFinishedPulling="2026-01-27 08:41:30.898966433 +0000 UTC m=+3357.210070508" observedRunningTime="2026-01-27 08:41:31.482326939 +0000 UTC m=+3357.793431004" watchObservedRunningTime="2026-01-27 08:41:31.489364819 +0000 UTC m=+3357.800468874" Jan 27 08:41:37 crc kubenswrapper[4799]: I0127 08:41:37.100382 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:37 crc kubenswrapper[4799]: I0127 08:41:37.101065 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:37 crc kubenswrapper[4799]: I0127 08:41:37.451666 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:41:37 crc kubenswrapper[4799]: E0127 08:41:37.452083 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:41:38 crc kubenswrapper[4799]: I0127 08:41:38.163124 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kr5vm" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="registry-server" probeResult="failure" output=< Jan 27 08:41:38 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 08:41:38 crc kubenswrapper[4799]: > Jan 27 08:41:47 crc kubenswrapper[4799]: I0127 08:41:47.175864 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:47 crc kubenswrapper[4799]: I0127 08:41:47.230559 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:47 crc kubenswrapper[4799]: I0127 08:41:47.426247 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kr5vm"] Jan 27 08:41:48 crc kubenswrapper[4799]: I0127 08:41:48.583497 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kr5vm" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="registry-server" containerID="cri-o://f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2" gracePeriod=2 Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.020478 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.100573 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxcb2\" (UniqueName: \"kubernetes.io/projected/80239c9c-4591-4f29-93be-59285c80abf6-kube-api-access-kxcb2\") pod \"80239c9c-4591-4f29-93be-59285c80abf6\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.100666 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-catalog-content\") pod \"80239c9c-4591-4f29-93be-59285c80abf6\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.100720 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-utilities\") pod \"80239c9c-4591-4f29-93be-59285c80abf6\" (UID: \"80239c9c-4591-4f29-93be-59285c80abf6\") " Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.102476 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-utilities" (OuterVolumeSpecName: "utilities") pod "80239c9c-4591-4f29-93be-59285c80abf6" (UID: "80239c9c-4591-4f29-93be-59285c80abf6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.102719 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.106236 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80239c9c-4591-4f29-93be-59285c80abf6-kube-api-access-kxcb2" (OuterVolumeSpecName: "kube-api-access-kxcb2") pod "80239c9c-4591-4f29-93be-59285c80abf6" (UID: "80239c9c-4591-4f29-93be-59285c80abf6"). InnerVolumeSpecName "kube-api-access-kxcb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.203854 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxcb2\" (UniqueName: \"kubernetes.io/projected/80239c9c-4591-4f29-93be-59285c80abf6-kube-api-access-kxcb2\") on node \"crc\" DevicePath \"\"" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.244747 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80239c9c-4591-4f29-93be-59285c80abf6" (UID: "80239c9c-4591-4f29-93be-59285c80abf6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.305440 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80239c9c-4591-4f29-93be-59285c80abf6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.594567 4799 generic.go:334] "Generic (PLEG): container finished" podID="80239c9c-4591-4f29-93be-59285c80abf6" containerID="f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2" exitCode=0 Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.594621 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr5vm" event={"ID":"80239c9c-4591-4f29-93be-59285c80abf6","Type":"ContainerDied","Data":"f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2"} Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.594669 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kr5vm" event={"ID":"80239c9c-4591-4f29-93be-59285c80abf6","Type":"ContainerDied","Data":"b18eee152060ce0d65f37d95a20cc796b6ff4f3e5e80b780107a0faeaabcf89e"} Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.594687 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kr5vm" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.594696 4799 scope.go:117] "RemoveContainer" containerID="f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.623771 4799 scope.go:117] "RemoveContainer" containerID="1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.634344 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kr5vm"] Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.640957 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kr5vm"] Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.656138 4799 scope.go:117] "RemoveContainer" containerID="fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.702097 4799 scope.go:117] "RemoveContainer" containerID="f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2" Jan 27 08:41:49 crc kubenswrapper[4799]: E0127 08:41:49.702660 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2\": container with ID starting with f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2 not found: ID does not exist" containerID="f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.702718 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2"} err="failed to get container status \"f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2\": rpc error: code = NotFound desc = could not find container 
\"f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2\": container with ID starting with f5c7ad883e788437bf50a44a0a11ce74dce5b8229fd97e4b8198a16cde8839a2 not found: ID does not exist" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.702749 4799 scope.go:117] "RemoveContainer" containerID="1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9" Jan 27 08:41:49 crc kubenswrapper[4799]: E0127 08:41:49.703243 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9\": container with ID starting with 1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9 not found: ID does not exist" containerID="1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.703284 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9"} err="failed to get container status \"1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9\": rpc error: code = NotFound desc = could not find container \"1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9\": container with ID starting with 1aebd5797ede174e1bfcbc2a8901ade352c0fe09858b9ec4031e2c199d8f96b9 not found: ID does not exist" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.703334 4799 scope.go:117] "RemoveContainer" containerID="fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3" Jan 27 08:41:49 crc kubenswrapper[4799]: E0127 08:41:49.703609 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3\": container with ID starting with fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3 not found: ID does not exist" 
containerID="fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3" Jan 27 08:41:49 crc kubenswrapper[4799]: I0127 08:41:49.703647 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3"} err="failed to get container status \"fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3\": rpc error: code = NotFound desc = could not find container \"fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3\": container with ID starting with fc1d49a67fc7a3aefad83290fd16e3d605e80388c984f63988625e17c95e41f3 not found: ID does not exist" Jan 27 08:41:50 crc kubenswrapper[4799]: I0127 08:41:50.452075 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:41:50 crc kubenswrapper[4799]: E0127 08:41:50.452871 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:41:50 crc kubenswrapper[4799]: I0127 08:41:50.468160 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80239c9c-4591-4f29-93be-59285c80abf6" path="/var/lib/kubelet/pods/80239c9c-4591-4f29-93be-59285c80abf6/volumes" Jan 27 08:42:02 crc kubenswrapper[4799]: I0127 08:42:02.451648 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:42:02 crc kubenswrapper[4799]: E0127 08:42:02.452584 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:42:17 crc kubenswrapper[4799]: I0127 08:42:17.451877 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:42:17 crc kubenswrapper[4799]: E0127 08:42:17.453076 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:42:32 crc kubenswrapper[4799]: I0127 08:42:32.451479 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:42:32 crc kubenswrapper[4799]: E0127 08:42:32.452165 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:42:47 crc kubenswrapper[4799]: I0127 08:42:47.451053 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:42:47 crc kubenswrapper[4799]: E0127 08:42:47.452071 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:43:01 crc kubenswrapper[4799]: I0127 08:43:01.451570 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:43:01 crc kubenswrapper[4799]: I0127 08:43:01.723027 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"abff62e88fd30bda88592867caac75625d6007d843bce4d6830c760f14da5a65"} Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.139957 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw"] Jan 27 08:45:00 crc kubenswrapper[4799]: E0127 08:45:00.140998 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="extract-content" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.141017 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="extract-content" Jan 27 08:45:00 crc kubenswrapper[4799]: E0127 08:45:00.141039 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="registry-server" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.141048 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="registry-server" Jan 27 08:45:00 crc kubenswrapper[4799]: E0127 08:45:00.141065 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="extract-utilities" Jan 
27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.141074 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="extract-utilities" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.141256 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="80239c9c-4591-4f29-93be-59285c80abf6" containerName="registry-server" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.141826 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.143750 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.149563 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.156970 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw"] Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.304021 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-secret-volume\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.304134 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-config-volume\") pod \"collect-profiles-29491725-pwqdw\" (UID: 
\"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.304346 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbkv\" (UniqueName: \"kubernetes.io/projected/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-kube-api-access-djbkv\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.406171 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-secret-volume\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.406731 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-config-volume\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.406917 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djbkv\" (UniqueName: \"kubernetes.io/projected/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-kube-api-access-djbkv\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.407667 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-config-volume\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.417115 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-secret-volume\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.423149 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djbkv\" (UniqueName: \"kubernetes.io/projected/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-kube-api-access-djbkv\") pod \"collect-profiles-29491725-pwqdw\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.464179 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:00 crc kubenswrapper[4799]: I0127 08:45:00.908018 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw"] Jan 27 08:45:01 crc kubenswrapper[4799]: I0127 08:45:01.697183 4799 generic.go:334] "Generic (PLEG): container finished" podID="2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" containerID="59504f5fa21e27601ec9dcf464434715372428419c275147f21d4409f761f926" exitCode=0 Jan 27 08:45:01 crc kubenswrapper[4799]: I0127 08:45:01.697245 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" event={"ID":"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7","Type":"ContainerDied","Data":"59504f5fa21e27601ec9dcf464434715372428419c275147f21d4409f761f926"} Jan 27 08:45:01 crc kubenswrapper[4799]: I0127 08:45:01.697505 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" event={"ID":"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7","Type":"ContainerStarted","Data":"afedeb7a0bc6a3bba94fd117c1a16bb37716ee934ac257a2718511b17bcfec92"} Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.000560 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.141992 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djbkv\" (UniqueName: \"kubernetes.io/projected/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-kube-api-access-djbkv\") pod \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.142115 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-config-volume\") pod \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.142184 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-secret-volume\") pod \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\" (UID: \"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7\") " Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.142942 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-config-volume" (OuterVolumeSpecName: "config-volume") pod "2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" (UID: "2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.147250 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-kube-api-access-djbkv" (OuterVolumeSpecName: "kube-api-access-djbkv") pod "2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" (UID: "2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7"). 
InnerVolumeSpecName "kube-api-access-djbkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.147705 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" (UID: "2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.244266 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.244335 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.244351 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djbkv\" (UniqueName: \"kubernetes.io/projected/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7-kube-api-access-djbkv\") on node \"crc\" DevicePath \"\"" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.723408 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" event={"ID":"2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7","Type":"ContainerDied","Data":"afedeb7a0bc6a3bba94fd117c1a16bb37716ee934ac257a2718511b17bcfec92"} Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.723460 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afedeb7a0bc6a3bba94fd117c1a16bb37716ee934ac257a2718511b17bcfec92" Jan 27 08:45:03 crc kubenswrapper[4799]: I0127 08:45:03.723651 4799 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw" Jan 27 08:45:04 crc kubenswrapper[4799]: I0127 08:45:04.073440 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl"] Jan 27 08:45:04 crc kubenswrapper[4799]: I0127 08:45:04.081367 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491680-qrwgl"] Jan 27 08:45:04 crc kubenswrapper[4799]: I0127 08:45:04.464539 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb1aeee-c165-4184-8db1-f48cde66dd4b" path="/var/lib/kubelet/pods/cdb1aeee-c165-4184-8db1-f48cde66dd4b/volumes" Jan 27 08:45:23 crc kubenswrapper[4799]: I0127 08:45:23.730679 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:45:23 crc kubenswrapper[4799]: I0127 08:45:23.731248 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:45:39 crc kubenswrapper[4799]: I0127 08:45:39.068529 4799 scope.go:117] "RemoveContainer" containerID="f9a2ae69d411267c99f2ffb5b83c0a7e0eb885e3bc63db0ca6637c9a30f87fe1" Jan 27 08:45:53 crc kubenswrapper[4799]: I0127 08:45:53.731947 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:45:53 crc kubenswrapper[4799]: I0127 08:45:53.732589 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:46:23 crc kubenswrapper[4799]: I0127 08:46:23.731751 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:46:23 crc kubenswrapper[4799]: I0127 08:46:23.733508 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:46:23 crc kubenswrapper[4799]: I0127 08:46:23.733616 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:46:23 crc kubenswrapper[4799]: I0127 08:46:23.734718 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abff62e88fd30bda88592867caac75625d6007d843bce4d6830c760f14da5a65"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:46:23 crc kubenswrapper[4799]: I0127 08:46:23.734831 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://abff62e88fd30bda88592867caac75625d6007d843bce4d6830c760f14da5a65" gracePeriod=600 Jan 27 08:46:24 crc kubenswrapper[4799]: I0127 08:46:24.366035 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="abff62e88fd30bda88592867caac75625d6007d843bce4d6830c760f14da5a65" exitCode=0 Jan 27 08:46:24 crc kubenswrapper[4799]: I0127 08:46:24.366327 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"abff62e88fd30bda88592867caac75625d6007d843bce4d6830c760f14da5a65"} Jan 27 08:46:24 crc kubenswrapper[4799]: I0127 08:46:24.366356 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234"} Jan 27 08:46:24 crc kubenswrapper[4799]: I0127 08:46:24.366372 4799 scope.go:117] "RemoveContainer" containerID="e47709584f3e4eeb154476098d462fa9d98e54cfcd889e3c5a778d64b1e3ce41" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.484631 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zc9pv"] Jan 27 08:48:37 crc kubenswrapper[4799]: E0127 08:48:37.486070 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" containerName="collect-profiles" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.486099 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" containerName="collect-profiles" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.486394 4799 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" containerName="collect-profiles" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.488620 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.502443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-utilities\") pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.502591 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-catalog-content\") pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.502649 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvcxv\" (UniqueName: \"kubernetes.io/projected/c4512146-3500-4301-ad02-636c0c550a5a-kube-api-access-wvcxv\") pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.505904 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zc9pv"] Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.603655 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-utilities\") 
pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.603732 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-catalog-content\") pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.603767 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvcxv\" (UniqueName: \"kubernetes.io/projected/c4512146-3500-4301-ad02-636c0c550a5a-kube-api-access-wvcxv\") pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.604237 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-catalog-content\") pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.604430 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-utilities\") pod \"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.630747 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvcxv\" (UniqueName: \"kubernetes.io/projected/c4512146-3500-4301-ad02-636c0c550a5a-kube-api-access-wvcxv\") pod 
\"certified-operators-zc9pv\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:37 crc kubenswrapper[4799]: I0127 08:48:37.815579 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:38 crc kubenswrapper[4799]: I0127 08:48:38.280687 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zc9pv"] Jan 27 08:48:38 crc kubenswrapper[4799]: I0127 08:48:38.381739 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc9pv" event={"ID":"c4512146-3500-4301-ad02-636c0c550a5a","Type":"ContainerStarted","Data":"6baed84c0a17313b88222c8055d5ae384369aef731e7d98209ffbb1f8d6c6a4b"} Jan 27 08:48:39 crc kubenswrapper[4799]: I0127 08:48:39.390024 4799 generic.go:334] "Generic (PLEG): container finished" podID="c4512146-3500-4301-ad02-636c0c550a5a" containerID="e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03" exitCode=0 Jan 27 08:48:39 crc kubenswrapper[4799]: I0127 08:48:39.390094 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc9pv" event={"ID":"c4512146-3500-4301-ad02-636c0c550a5a","Type":"ContainerDied","Data":"e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03"} Jan 27 08:48:39 crc kubenswrapper[4799]: I0127 08:48:39.392849 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:48:41 crc kubenswrapper[4799]: I0127 08:48:41.407325 4799 generic.go:334] "Generic (PLEG): container finished" podID="c4512146-3500-4301-ad02-636c0c550a5a" containerID="eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688" exitCode=0 Jan 27 08:48:41 crc kubenswrapper[4799]: I0127 08:48:41.407468 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc9pv" 
event={"ID":"c4512146-3500-4301-ad02-636c0c550a5a","Type":"ContainerDied","Data":"eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688"} Jan 27 08:48:43 crc kubenswrapper[4799]: I0127 08:48:43.427684 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc9pv" event={"ID":"c4512146-3500-4301-ad02-636c0c550a5a","Type":"ContainerStarted","Data":"a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07"} Jan 27 08:48:47 crc kubenswrapper[4799]: I0127 08:48:47.816381 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:47 crc kubenswrapper[4799]: I0127 08:48:47.816925 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:47 crc kubenswrapper[4799]: I0127 08:48:47.879237 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:47 crc kubenswrapper[4799]: I0127 08:48:47.902324 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zc9pv" podStartSLOduration=8.373923482 podStartE2EDuration="10.90228932s" podCreationTimestamp="2026-01-27 08:48:37 +0000 UTC" firstStartedPulling="2026-01-27 08:48:39.392275034 +0000 UTC m=+3785.703379139" lastFinishedPulling="2026-01-27 08:48:41.920640922 +0000 UTC m=+3788.231744977" observedRunningTime="2026-01-27 08:48:43.443544274 +0000 UTC m=+3789.754648349" watchObservedRunningTime="2026-01-27 08:48:47.90228932 +0000 UTC m=+3794.213393385" Jan 27 08:48:48 crc kubenswrapper[4799]: I0127 08:48:48.502259 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:48 crc kubenswrapper[4799]: I0127 08:48:48.563126 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-zc9pv"] Jan 27 08:48:50 crc kubenswrapper[4799]: I0127 08:48:50.474787 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zc9pv" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="registry-server" containerID="cri-o://a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07" gracePeriod=2 Jan 27 08:48:50 crc kubenswrapper[4799]: I0127 08:48:50.870772 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.009476 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-utilities\") pod \"c4512146-3500-4301-ad02-636c0c550a5a\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.009594 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-catalog-content\") pod \"c4512146-3500-4301-ad02-636c0c550a5a\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.009683 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvcxv\" (UniqueName: \"kubernetes.io/projected/c4512146-3500-4301-ad02-636c0c550a5a-kube-api-access-wvcxv\") pod \"c4512146-3500-4301-ad02-636c0c550a5a\" (UID: \"c4512146-3500-4301-ad02-636c0c550a5a\") " Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.010839 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-utilities" (OuterVolumeSpecName: "utilities") pod "c4512146-3500-4301-ad02-636c0c550a5a" (UID: 
"c4512146-3500-4301-ad02-636c0c550a5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.015610 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4512146-3500-4301-ad02-636c0c550a5a-kube-api-access-wvcxv" (OuterVolumeSpecName: "kube-api-access-wvcxv") pod "c4512146-3500-4301-ad02-636c0c550a5a" (UID: "c4512146-3500-4301-ad02-636c0c550a5a"). InnerVolumeSpecName "kube-api-access-wvcxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.060857 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4512146-3500-4301-ad02-636c0c550a5a" (UID: "c4512146-3500-4301-ad02-636c0c550a5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.110563 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.110593 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4512146-3500-4301-ad02-636c0c550a5a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.110605 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvcxv\" (UniqueName: \"kubernetes.io/projected/c4512146-3500-4301-ad02-636c0c550a5a-kube-api-access-wvcxv\") on node \"crc\" DevicePath \"\"" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.485722 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="c4512146-3500-4301-ad02-636c0c550a5a" containerID="a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07" exitCode=0 Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.485769 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc9pv" event={"ID":"c4512146-3500-4301-ad02-636c0c550a5a","Type":"ContainerDied","Data":"a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07"} Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.485784 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc9pv" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.485797 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc9pv" event={"ID":"c4512146-3500-4301-ad02-636c0c550a5a","Type":"ContainerDied","Data":"6baed84c0a17313b88222c8055d5ae384369aef731e7d98209ffbb1f8d6c6a4b"} Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.485817 4799 scope.go:117] "RemoveContainer" containerID="a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.513567 4799 scope.go:117] "RemoveContainer" containerID="eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.521433 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zc9pv"] Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.528729 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zc9pv"] Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.536102 4799 scope.go:117] "RemoveContainer" containerID="e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.556263 4799 scope.go:117] "RemoveContainer" 
containerID="a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07" Jan 27 08:48:51 crc kubenswrapper[4799]: E0127 08:48:51.556699 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07\": container with ID starting with a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07 not found: ID does not exist" containerID="a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.556741 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07"} err="failed to get container status \"a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07\": rpc error: code = NotFound desc = could not find container \"a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07\": container with ID starting with a06fbcf1d5b312c96ec5024225bcb9db14cfad49dcffbbc7c374492c89f55d07 not found: ID does not exist" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.556768 4799 scope.go:117] "RemoveContainer" containerID="eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688" Jan 27 08:48:51 crc kubenswrapper[4799]: E0127 08:48:51.557136 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688\": container with ID starting with eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688 not found: ID does not exist" containerID="eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.557166 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688"} err="failed to get container status \"eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688\": rpc error: code = NotFound desc = could not find container \"eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688\": container with ID starting with eaf6c10a5602a0ecdaac4644a3899b58d14122dc28eb53b8070d8306d6e64688 not found: ID does not exist" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.557180 4799 scope.go:117] "RemoveContainer" containerID="e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03" Jan 27 08:48:51 crc kubenswrapper[4799]: E0127 08:48:51.557508 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03\": container with ID starting with e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03 not found: ID does not exist" containerID="e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03" Jan 27 08:48:51 crc kubenswrapper[4799]: I0127 08:48:51.557533 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03"} err="failed to get container status \"e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03\": rpc error: code = NotFound desc = could not find container \"e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03\": container with ID starting with e9021a8acdc9ad55f41a4b27645eaeca1388b11f691c8a46c203f4f0f952ce03 not found: ID does not exist" Jan 27 08:48:52 crc kubenswrapper[4799]: I0127 08:48:52.459165 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4512146-3500-4301-ad02-636c0c550a5a" path="/var/lib/kubelet/pods/c4512146-3500-4301-ad02-636c0c550a5a/volumes" Jan 27 08:48:53 crc kubenswrapper[4799]: I0127 
08:48:53.731472 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:48:53 crc kubenswrapper[4799]: I0127 08:48:53.733971 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:49:23 crc kubenswrapper[4799]: I0127 08:49:23.731225 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:49:23 crc kubenswrapper[4799]: I0127 08:49:23.731830 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.731762 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.734780 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.734993 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.736345 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.736612 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" gracePeriod=600 Jan 27 08:49:53 crc kubenswrapper[4799]: E0127 08:49:53.864944 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.981020 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" exitCode=0 Jan 27 
08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.981068 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234"} Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.981110 4799 scope.go:117] "RemoveContainer" containerID="abff62e88fd30bda88592867caac75625d6007d843bce4d6830c760f14da5a65" Jan 27 08:49:53 crc kubenswrapper[4799]: I0127 08:49:53.981791 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:49:53 crc kubenswrapper[4799]: E0127 08:49:53.982106 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:50:08 crc kubenswrapper[4799]: I0127 08:50:08.451491 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:50:08 crc kubenswrapper[4799]: E0127 08:50:08.453544 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:50:20 crc kubenswrapper[4799]: I0127 08:50:20.451997 4799 scope.go:117] "RemoveContainer" 
containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:50:20 crc kubenswrapper[4799]: E0127 08:50:20.452638 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:50:21 crc kubenswrapper[4799]: I0127 08:50:21.918840 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bvt4l"] Jan 27 08:50:21 crc kubenswrapper[4799]: E0127 08:50:21.919498 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="extract-content" Jan 27 08:50:21 crc kubenswrapper[4799]: I0127 08:50:21.919513 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="extract-content" Jan 27 08:50:21 crc kubenswrapper[4799]: E0127 08:50:21.919537 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="registry-server" Jan 27 08:50:21 crc kubenswrapper[4799]: I0127 08:50:21.919545 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="registry-server" Jan 27 08:50:21 crc kubenswrapper[4799]: E0127 08:50:21.919560 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="extract-utilities" Jan 27 08:50:21 crc kubenswrapper[4799]: I0127 08:50:21.919568 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="extract-utilities" Jan 27 08:50:21 crc kubenswrapper[4799]: I0127 08:50:21.919753 
4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4512146-3500-4301-ad02-636c0c550a5a" containerName="registry-server" Jan 27 08:50:21 crc kubenswrapper[4799]: I0127 08:50:21.920979 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:21 crc kubenswrapper[4799]: I0127 08:50:21.951199 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvt4l"] Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.040979 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-utilities\") pod \"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.041042 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47fk6\" (UniqueName: \"kubernetes.io/projected/991b9a32-dd2f-47fe-82da-d6829240ba9f-kube-api-access-47fk6\") pod \"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.041108 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-catalog-content\") pod \"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.141889 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-utilities\") pod 
\"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.141930 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47fk6\" (UniqueName: \"kubernetes.io/projected/991b9a32-dd2f-47fe-82da-d6829240ba9f-kube-api-access-47fk6\") pod \"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.142022 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-catalog-content\") pod \"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.142576 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-catalog-content\") pod \"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.142637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-utilities\") pod \"redhat-marketplace-bvt4l\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.162144 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47fk6\" (UniqueName: \"kubernetes.io/projected/991b9a32-dd2f-47fe-82da-d6829240ba9f-kube-api-access-47fk6\") pod \"redhat-marketplace-bvt4l\" (UID: 
\"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.246266 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:22 crc kubenswrapper[4799]: I0127 08:50:22.744498 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvt4l"] Jan 27 08:50:23 crc kubenswrapper[4799]: I0127 08:50:23.220845 4799 generic.go:334] "Generic (PLEG): container finished" podID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerID="59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b" exitCode=0 Jan 27 08:50:23 crc kubenswrapper[4799]: I0127 08:50:23.220887 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvt4l" event={"ID":"991b9a32-dd2f-47fe-82da-d6829240ba9f","Type":"ContainerDied","Data":"59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b"} Jan 27 08:50:23 crc kubenswrapper[4799]: I0127 08:50:23.220912 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvt4l" event={"ID":"991b9a32-dd2f-47fe-82da-d6829240ba9f","Type":"ContainerStarted","Data":"ab2774d7e606a81edc08b57ad204d21d4bbd6f16b4a6a1be653152b3e29940a6"} Jan 27 08:50:24 crc kubenswrapper[4799]: I0127 08:50:24.233537 4799 generic.go:334] "Generic (PLEG): container finished" podID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerID="66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670" exitCode=0 Jan 27 08:50:24 crc kubenswrapper[4799]: I0127 08:50:24.233754 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvt4l" event={"ID":"991b9a32-dd2f-47fe-82da-d6829240ba9f","Type":"ContainerDied","Data":"66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670"} Jan 27 08:50:25 crc kubenswrapper[4799]: I0127 08:50:25.241873 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvt4l" event={"ID":"991b9a32-dd2f-47fe-82da-d6829240ba9f","Type":"ContainerStarted","Data":"f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113"} Jan 27 08:50:25 crc kubenswrapper[4799]: I0127 08:50:25.257912 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bvt4l" podStartSLOduration=2.843075509 podStartE2EDuration="4.257895112s" podCreationTimestamp="2026-01-27 08:50:21 +0000 UTC" firstStartedPulling="2026-01-27 08:50:23.223042235 +0000 UTC m=+3889.534146300" lastFinishedPulling="2026-01-27 08:50:24.637861798 +0000 UTC m=+3890.948965903" observedRunningTime="2026-01-27 08:50:25.255065896 +0000 UTC m=+3891.566169971" watchObservedRunningTime="2026-01-27 08:50:25.257895112 +0000 UTC m=+3891.568999177" Jan 27 08:50:31 crc kubenswrapper[4799]: I0127 08:50:31.451428 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:50:31 crc kubenswrapper[4799]: E0127 08:50:31.452292 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:50:32 crc kubenswrapper[4799]: I0127 08:50:32.246519 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:32 crc kubenswrapper[4799]: I0127 08:50:32.246587 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:32 crc kubenswrapper[4799]: I0127 
08:50:32.295566 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:32 crc kubenswrapper[4799]: I0127 08:50:32.357988 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:32 crc kubenswrapper[4799]: I0127 08:50:32.531312 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvt4l"] Jan 27 08:50:34 crc kubenswrapper[4799]: I0127 08:50:34.312124 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bvt4l" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="registry-server" containerID="cri-o://f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113" gracePeriod=2 Jan 27 08:50:34 crc kubenswrapper[4799]: I0127 08:50:34.902139 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.041052 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-catalog-content\") pod \"991b9a32-dd2f-47fe-82da-d6829240ba9f\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.041404 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-utilities\") pod \"991b9a32-dd2f-47fe-82da-d6829240ba9f\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.041506 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47fk6\" (UniqueName: 
\"kubernetes.io/projected/991b9a32-dd2f-47fe-82da-d6829240ba9f-kube-api-access-47fk6\") pod \"991b9a32-dd2f-47fe-82da-d6829240ba9f\" (UID: \"991b9a32-dd2f-47fe-82da-d6829240ba9f\") " Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.042194 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-utilities" (OuterVolumeSpecName: "utilities") pod "991b9a32-dd2f-47fe-82da-d6829240ba9f" (UID: "991b9a32-dd2f-47fe-82da-d6829240ba9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.046349 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991b9a32-dd2f-47fe-82da-d6829240ba9f-kube-api-access-47fk6" (OuterVolumeSpecName: "kube-api-access-47fk6") pod "991b9a32-dd2f-47fe-82da-d6829240ba9f" (UID: "991b9a32-dd2f-47fe-82da-d6829240ba9f"). InnerVolumeSpecName "kube-api-access-47fk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.063130 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "991b9a32-dd2f-47fe-82da-d6829240ba9f" (UID: "991b9a32-dd2f-47fe-82da-d6829240ba9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.143887 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47fk6\" (UniqueName: \"kubernetes.io/projected/991b9a32-dd2f-47fe-82da-d6829240ba9f-kube-api-access-47fk6\") on node \"crc\" DevicePath \"\"" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.143958 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.143984 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/991b9a32-dd2f-47fe-82da-d6829240ba9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.327545 4799 generic.go:334] "Generic (PLEG): container finished" podID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerID="f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113" exitCode=0 Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.327628 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvt4l" event={"ID":"991b9a32-dd2f-47fe-82da-d6829240ba9f","Type":"ContainerDied","Data":"f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113"} Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.327669 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvt4l" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.327701 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvt4l" event={"ID":"991b9a32-dd2f-47fe-82da-d6829240ba9f","Type":"ContainerDied","Data":"ab2774d7e606a81edc08b57ad204d21d4bbd6f16b4a6a1be653152b3e29940a6"} Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.327749 4799 scope.go:117] "RemoveContainer" containerID="f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.355364 4799 scope.go:117] "RemoveContainer" containerID="66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.377445 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvt4l"] Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.385093 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvt4l"] Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.449711 4799 scope.go:117] "RemoveContainer" containerID="59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.463901 4799 scope.go:117] "RemoveContainer" containerID="f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113" Jan 27 08:50:35 crc kubenswrapper[4799]: E0127 08:50:35.464351 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113\": container with ID starting with f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113 not found: ID does not exist" containerID="f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.464389 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113"} err="failed to get container status \"f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113\": rpc error: code = NotFound desc = could not find container \"f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113\": container with ID starting with f262c488519c78ee21fd0aaec757197436b1c46cc8d8c842bc4400e26e28c113 not found: ID does not exist" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.464444 4799 scope.go:117] "RemoveContainer" containerID="66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670" Jan 27 08:50:35 crc kubenswrapper[4799]: E0127 08:50:35.464796 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670\": container with ID starting with 66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670 not found: ID does not exist" containerID="66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.464826 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670"} err="failed to get container status \"66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670\": rpc error: code = NotFound desc = could not find container \"66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670\": container with ID starting with 66216c48b5cecec3fb8e0a058bb40ef55778abee69091be9aeda016cba54e670 not found: ID does not exist" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.464844 4799 scope.go:117] "RemoveContainer" containerID="59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b" Jan 27 08:50:35 crc kubenswrapper[4799]: E0127 
08:50:35.465145 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b\": container with ID starting with 59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b not found: ID does not exist" containerID="59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b" Jan 27 08:50:35 crc kubenswrapper[4799]: I0127 08:50:35.465176 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b"} err="failed to get container status \"59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b\": rpc error: code = NotFound desc = could not find container \"59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b\": container with ID starting with 59fdb45e56e2d6927c58c877af1c54564ebd29bc11344a7344f44e2269a0033b not found: ID does not exist" Jan 27 08:50:36 crc kubenswrapper[4799]: I0127 08:50:36.461113 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" path="/var/lib/kubelet/pods/991b9a32-dd2f-47fe-82da-d6829240ba9f/volumes" Jan 27 08:50:44 crc kubenswrapper[4799]: I0127 08:50:44.462568 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:50:44 crc kubenswrapper[4799]: E0127 08:50:44.463530 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:50:59 crc kubenswrapper[4799]: I0127 08:50:59.451469 
4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:50:59 crc kubenswrapper[4799]: E0127 08:50:59.452588 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:51:10 crc kubenswrapper[4799]: I0127 08:51:10.451631 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:51:10 crc kubenswrapper[4799]: E0127 08:51:10.452249 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:51:25 crc kubenswrapper[4799]: I0127 08:51:25.451854 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:51:25 crc kubenswrapper[4799]: E0127 08:51:25.453037 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:51:40 crc kubenswrapper[4799]: I0127 
08:51:40.451799 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:51:40 crc kubenswrapper[4799]: E0127 08:51:40.452556 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:51:51 crc kubenswrapper[4799]: I0127 08:51:51.451366 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:51:51 crc kubenswrapper[4799]: E0127 08:51:51.452211 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:52:04 crc kubenswrapper[4799]: I0127 08:52:04.456413 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:52:04 crc kubenswrapper[4799]: E0127 08:52:04.457245 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:52:17 crc 
kubenswrapper[4799]: I0127 08:52:17.451744 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:52:17 crc kubenswrapper[4799]: E0127 08:52:17.452931 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:52:29 crc kubenswrapper[4799]: I0127 08:52:29.451394 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:52:29 crc kubenswrapper[4799]: E0127 08:52:29.452935 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:52:40 crc kubenswrapper[4799]: I0127 08:52:40.451388 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:52:40 crc kubenswrapper[4799]: E0127 08:52:40.456173 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 
27 08:52:52 crc kubenswrapper[4799]: I0127 08:52:52.451406 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:52:52 crc kubenswrapper[4799]: E0127 08:52:52.452925 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:53:04 crc kubenswrapper[4799]: I0127 08:53:04.455446 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:53:04 crc kubenswrapper[4799]: E0127 08:53:04.456160 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:53:19 crc kubenswrapper[4799]: I0127 08:53:19.451046 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:53:19 crc kubenswrapper[4799]: E0127 08:53:19.451811 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:53:33 crc kubenswrapper[4799]: I0127 08:53:33.452085 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:53:33 crc kubenswrapper[4799]: E0127 08:53:33.453419 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:53:46 crc kubenswrapper[4799]: I0127 08:53:46.451995 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:53:46 crc kubenswrapper[4799]: E0127 08:53:46.452730 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:54:00 crc kubenswrapper[4799]: I0127 08:54:00.451927 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:54:00 crc kubenswrapper[4799]: E0127 08:54:00.452751 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.595924 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vp7xf"] Jan 27 08:54:07 crc kubenswrapper[4799]: E0127 08:54:07.596821 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="extract-content" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.596836 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="extract-content" Jan 27 08:54:07 crc kubenswrapper[4799]: E0127 08:54:07.596857 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="extract-utilities" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.596865 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="extract-utilities" Jan 27 08:54:07 crc kubenswrapper[4799]: E0127 08:54:07.596887 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="registry-server" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.596896 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="registry-server" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.597095 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="991b9a32-dd2f-47fe-82da-d6829240ba9f" containerName="registry-server" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.598292 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.617203 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vp7xf"] Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.732875 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-catalog-content\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.732945 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxc7m\" (UniqueName: \"kubernetes.io/projected/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-kube-api-access-gxc7m\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.732994 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-utilities\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.834077 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxc7m\" (UniqueName: \"kubernetes.io/projected/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-kube-api-access-gxc7m\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.834175 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-utilities\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.834256 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-catalog-content\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.834759 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-utilities\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.834778 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-catalog-content\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.853999 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxc7m\" (UniqueName: \"kubernetes.io/projected/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-kube-api-access-gxc7m\") pod \"redhat-operators-vp7xf\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:07 crc kubenswrapper[4799]: I0127 08:54:07.919865 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:08 crc kubenswrapper[4799]: I0127 08:54:08.389099 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vp7xf"] Jan 27 08:54:08 crc kubenswrapper[4799]: I0127 08:54:08.951737 4799 generic.go:334] "Generic (PLEG): container finished" podID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerID="4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2" exitCode=0 Jan 27 08:54:08 crc kubenswrapper[4799]: I0127 08:54:08.951784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7xf" event={"ID":"b3504cb6-69a2-49c9-8259-5ea5392cbeb0","Type":"ContainerDied","Data":"4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2"} Jan 27 08:54:08 crc kubenswrapper[4799]: I0127 08:54:08.952011 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7xf" event={"ID":"b3504cb6-69a2-49c9-8259-5ea5392cbeb0","Type":"ContainerStarted","Data":"2d5fd5f4c4a83ea6348ab130c734f69a97ab10767771470a252cbc79cb0ad05e"} Jan 27 08:54:08 crc kubenswrapper[4799]: I0127 08:54:08.953350 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 08:54:09 crc kubenswrapper[4799]: I0127 08:54:09.974774 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7xf" event={"ID":"b3504cb6-69a2-49c9-8259-5ea5392cbeb0","Type":"ContainerStarted","Data":"14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec"} Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.801188 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ght78"] Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.803052 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.806688 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ght78"] Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.879357 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6zj\" (UniqueName: \"kubernetes.io/projected/7408787f-acea-4bdc-91d8-d42fb17b54ce-kube-api-access-mv6zj\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.879430 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-utilities\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.879466 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-catalog-content\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.981212 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6zj\" (UniqueName: \"kubernetes.io/projected/7408787f-acea-4bdc-91d8-d42fb17b54ce-kube-api-access-mv6zj\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.981488 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-utilities\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.981528 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-catalog-content\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.981991 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-utilities\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.982003 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-catalog-content\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.984055 4799 generic.go:334] "Generic (PLEG): container finished" podID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerID="14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec" exitCode=0 Jan 27 08:54:10 crc kubenswrapper[4799]: I0127 08:54:10.984096 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7xf" 
event={"ID":"b3504cb6-69a2-49c9-8259-5ea5392cbeb0","Type":"ContainerDied","Data":"14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec"} Jan 27 08:54:11 crc kubenswrapper[4799]: I0127 08:54:11.015552 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6zj\" (UniqueName: \"kubernetes.io/projected/7408787f-acea-4bdc-91d8-d42fb17b54ce-kube-api-access-mv6zj\") pod \"community-operators-ght78\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:11 crc kubenswrapper[4799]: I0127 08:54:11.136630 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:11 crc kubenswrapper[4799]: I0127 08:54:11.686419 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ght78"] Jan 27 08:54:11 crc kubenswrapper[4799]: W0127 08:54:11.696762 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7408787f_acea_4bdc_91d8_d42fb17b54ce.slice/crio-a274d8d0283c76c555889bf0db96c140b083c2bbdcb70f7744efde0bd28d8428 WatchSource:0}: Error finding container a274d8d0283c76c555889bf0db96c140b083c2bbdcb70f7744efde0bd28d8428: Status 404 returned error can't find the container with id a274d8d0283c76c555889bf0db96c140b083c2bbdcb70f7744efde0bd28d8428 Jan 27 08:54:11 crc kubenswrapper[4799]: I0127 08:54:11.993450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7xf" event={"ID":"b3504cb6-69a2-49c9-8259-5ea5392cbeb0","Type":"ContainerStarted","Data":"d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c"} Jan 27 08:54:11 crc kubenswrapper[4799]: I0127 08:54:11.995631 4799 generic.go:334] "Generic (PLEG): container finished" podID="7408787f-acea-4bdc-91d8-d42fb17b54ce" 
containerID="2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13" exitCode=0 Jan 27 08:54:11 crc kubenswrapper[4799]: I0127 08:54:11.995671 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ght78" event={"ID":"7408787f-acea-4bdc-91d8-d42fb17b54ce","Type":"ContainerDied","Data":"2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13"} Jan 27 08:54:11 crc kubenswrapper[4799]: I0127 08:54:11.995716 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ght78" event={"ID":"7408787f-acea-4bdc-91d8-d42fb17b54ce","Type":"ContainerStarted","Data":"a274d8d0283c76c555889bf0db96c140b083c2bbdcb70f7744efde0bd28d8428"} Jan 27 08:54:12 crc kubenswrapper[4799]: I0127 08:54:12.038459 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vp7xf" podStartSLOduration=2.480794372 podStartE2EDuration="5.038423297s" podCreationTimestamp="2026-01-27 08:54:07 +0000 UTC" firstStartedPulling="2026-01-27 08:54:08.953155804 +0000 UTC m=+4115.264259859" lastFinishedPulling="2026-01-27 08:54:11.510784719 +0000 UTC m=+4117.821888784" observedRunningTime="2026-01-27 08:54:12.03002325 +0000 UTC m=+4118.341127335" watchObservedRunningTime="2026-01-27 08:54:12.038423297 +0000 UTC m=+4118.349527362" Jan 27 08:54:13 crc kubenswrapper[4799]: I0127 08:54:13.004226 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ght78" event={"ID":"7408787f-acea-4bdc-91d8-d42fb17b54ce","Type":"ContainerStarted","Data":"4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2"} Jan 27 08:54:13 crc kubenswrapper[4799]: I0127 08:54:13.450819 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:54:13 crc kubenswrapper[4799]: E0127 08:54:13.451080 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:54:14 crc kubenswrapper[4799]: I0127 08:54:14.011104 4799 generic.go:334] "Generic (PLEG): container finished" podID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerID="4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2" exitCode=0 Jan 27 08:54:14 crc kubenswrapper[4799]: I0127 08:54:14.011149 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ght78" event={"ID":"7408787f-acea-4bdc-91d8-d42fb17b54ce","Type":"ContainerDied","Data":"4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2"} Jan 27 08:54:15 crc kubenswrapper[4799]: I0127 08:54:15.044813 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ght78" event={"ID":"7408787f-acea-4bdc-91d8-d42fb17b54ce","Type":"ContainerStarted","Data":"de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8"} Jan 27 08:54:15 crc kubenswrapper[4799]: I0127 08:54:15.063180 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ght78" podStartSLOduration=2.662744279 podStartE2EDuration="5.063159574s" podCreationTimestamp="2026-01-27 08:54:10 +0000 UTC" firstStartedPulling="2026-01-27 08:54:11.9967096 +0000 UTC m=+4118.307813665" lastFinishedPulling="2026-01-27 08:54:14.397124895 +0000 UTC m=+4120.708228960" observedRunningTime="2026-01-27 08:54:15.061074558 +0000 UTC m=+4121.372178633" watchObservedRunningTime="2026-01-27 08:54:15.063159574 +0000 UTC m=+4121.374263639" Jan 27 08:54:17 crc kubenswrapper[4799]: I0127 08:54:17.920376 4799 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:17 crc kubenswrapper[4799]: I0127 08:54:17.920810 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:18 crc kubenswrapper[4799]: I0127 08:54:18.078337 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:18 crc kubenswrapper[4799]: I0127 08:54:18.122593 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:19 crc kubenswrapper[4799]: I0127 08:54:19.178911 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vp7xf"] Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.078163 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vp7xf" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerName="registry-server" containerID="cri-o://d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c" gracePeriod=2 Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.445035 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.518546 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-utilities\") pod \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.518593 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxc7m\" (UniqueName: \"kubernetes.io/projected/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-kube-api-access-gxc7m\") pod \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.518661 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-catalog-content\") pod \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\" (UID: \"b3504cb6-69a2-49c9-8259-5ea5392cbeb0\") " Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.519315 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-utilities" (OuterVolumeSpecName: "utilities") pod "b3504cb6-69a2-49c9-8259-5ea5392cbeb0" (UID: "b3504cb6-69a2-49c9-8259-5ea5392cbeb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.524553 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-kube-api-access-gxc7m" (OuterVolumeSpecName: "kube-api-access-gxc7m") pod "b3504cb6-69a2-49c9-8259-5ea5392cbeb0" (UID: "b3504cb6-69a2-49c9-8259-5ea5392cbeb0"). InnerVolumeSpecName "kube-api-access-gxc7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.620559 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:54:20 crc kubenswrapper[4799]: I0127 08:54:20.620626 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxc7m\" (UniqueName: \"kubernetes.io/projected/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-kube-api-access-gxc7m\") on node \"crc\" DevicePath \"\"" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.088113 4799 generic.go:334] "Generic (PLEG): container finished" podID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerID="d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c" exitCode=0 Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.088170 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7xf" event={"ID":"b3504cb6-69a2-49c9-8259-5ea5392cbeb0","Type":"ContainerDied","Data":"d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c"} Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.088522 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7xf" event={"ID":"b3504cb6-69a2-49c9-8259-5ea5392cbeb0","Type":"ContainerDied","Data":"2d5fd5f4c4a83ea6348ab130c734f69a97ab10767771470a252cbc79cb0ad05e"} Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.088543 4799 scope.go:117] "RemoveContainer" containerID="d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.088189 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7xf" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.109765 4799 scope.go:117] "RemoveContainer" containerID="14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.124946 4799 scope.go:117] "RemoveContainer" containerID="4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.137056 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.138079 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.165798 4799 scope.go:117] "RemoveContainer" containerID="d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c" Jan 27 08:54:21 crc kubenswrapper[4799]: E0127 08:54:21.166320 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c\": container with ID starting with d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c not found: ID does not exist" containerID="d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.166350 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c"} err="failed to get container status \"d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c\": rpc error: code = NotFound desc = could not find container \"d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c\": container with ID starting with 
d9f647a110c15d32d63d505a033ece06c0c7f5c0e37baf4424192e64755a412c not found: ID does not exist" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.166370 4799 scope.go:117] "RemoveContainer" containerID="14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec" Jan 27 08:54:21 crc kubenswrapper[4799]: E0127 08:54:21.166632 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec\": container with ID starting with 14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec not found: ID does not exist" containerID="14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.166655 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec"} err="failed to get container status \"14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec\": rpc error: code = NotFound desc = could not find container \"14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec\": container with ID starting with 14c4a6184c2f760f39a207cfbb56ccf47dc333a45d08a98d068c6117bd1928ec not found: ID does not exist" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.166668 4799 scope.go:117] "RemoveContainer" containerID="4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2" Jan 27 08:54:21 crc kubenswrapper[4799]: E0127 08:54:21.166915 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2\": container with ID starting with 4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2 not found: ID does not exist" containerID="4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2" Jan 27 08:54:21 crc 
kubenswrapper[4799]: I0127 08:54:21.166948 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2"} err="failed to get container status \"4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2\": rpc error: code = NotFound desc = could not find container \"4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2\": container with ID starting with 4f801b9b291eb0ea2751310fc0d6fb786bf4b35b54d14b346f7ca61cf6d1d5c2 not found: ID does not exist" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.203459 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:21 crc kubenswrapper[4799]: I0127 08:54:21.950942 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3504cb6-69a2-49c9-8259-5ea5392cbeb0" (UID: "b3504cb6-69a2-49c9-8259-5ea5392cbeb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:54:22 crc kubenswrapper[4799]: I0127 08:54:22.029294 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vp7xf"] Jan 27 08:54:22 crc kubenswrapper[4799]: I0127 08:54:22.035849 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vp7xf"] Jan 27 08:54:22 crc kubenswrapper[4799]: I0127 08:54:22.048025 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3504cb6-69a2-49c9-8259-5ea5392cbeb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:54:22 crc kubenswrapper[4799]: I0127 08:54:22.141319 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:22 crc kubenswrapper[4799]: I0127 08:54:22.468402 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" path="/var/lib/kubelet/pods/b3504cb6-69a2-49c9-8259-5ea5392cbeb0/volumes" Jan 27 08:54:22 crc kubenswrapper[4799]: I0127 08:54:22.981792 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ght78"] Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.110961 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ght78" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="registry-server" containerID="cri-o://de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8" gracePeriod=2 Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.553532 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.685762 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6zj\" (UniqueName: \"kubernetes.io/projected/7408787f-acea-4bdc-91d8-d42fb17b54ce-kube-api-access-mv6zj\") pod \"7408787f-acea-4bdc-91d8-d42fb17b54ce\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.685881 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-utilities\") pod \"7408787f-acea-4bdc-91d8-d42fb17b54ce\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.687479 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-utilities" (OuterVolumeSpecName: "utilities") pod "7408787f-acea-4bdc-91d8-d42fb17b54ce" (UID: "7408787f-acea-4bdc-91d8-d42fb17b54ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.687596 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-catalog-content\") pod \"7408787f-acea-4bdc-91d8-d42fb17b54ce\" (UID: \"7408787f-acea-4bdc-91d8-d42fb17b54ce\") " Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.698460 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7408787f-acea-4bdc-91d8-d42fb17b54ce-kube-api-access-mv6zj" (OuterVolumeSpecName: "kube-api-access-mv6zj") pod "7408787f-acea-4bdc-91d8-d42fb17b54ce" (UID: "7408787f-acea-4bdc-91d8-d42fb17b54ce"). InnerVolumeSpecName "kube-api-access-mv6zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.704037 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6zj\" (UniqueName: \"kubernetes.io/projected/7408787f-acea-4bdc-91d8-d42fb17b54ce-kube-api-access-mv6zj\") on node \"crc\" DevicePath \"\"" Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.704086 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.771846 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7408787f-acea-4bdc-91d8-d42fb17b54ce" (UID: "7408787f-acea-4bdc-91d8-d42fb17b54ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 08:54:24 crc kubenswrapper[4799]: I0127 08:54:24.804887 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7408787f-acea-4bdc-91d8-d42fb17b54ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.121198 4799 generic.go:334] "Generic (PLEG): container finished" podID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerID="de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8" exitCode=0 Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.121255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ght78" event={"ID":"7408787f-acea-4bdc-91d8-d42fb17b54ce","Type":"ContainerDied","Data":"de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8"} Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.121319 4799 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-ght78" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.121357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ght78" event={"ID":"7408787f-acea-4bdc-91d8-d42fb17b54ce","Type":"ContainerDied","Data":"a274d8d0283c76c555889bf0db96c140b083c2bbdcb70f7744efde0bd28d8428"} Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.121392 4799 scope.go:117] "RemoveContainer" containerID="de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.141150 4799 scope.go:117] "RemoveContainer" containerID="4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.163555 4799 scope.go:117] "RemoveContainer" containerID="2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.168326 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ght78"] Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.177580 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ght78"] Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.199213 4799 scope.go:117] "RemoveContainer" containerID="de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8" Jan 27 08:54:25 crc kubenswrapper[4799]: E0127 08:54:25.199783 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8\": container with ID starting with de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8 not found: ID does not exist" containerID="de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.199837 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8"} err="failed to get container status \"de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8\": rpc error: code = NotFound desc = could not find container \"de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8\": container with ID starting with de92c1d395092a85474752a6e55a9996b8b32e63b10f49fc0a8ffd7c42cdc9e8 not found: ID does not exist" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.199888 4799 scope.go:117] "RemoveContainer" containerID="4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2" Jan 27 08:54:25 crc kubenswrapper[4799]: E0127 08:54:25.200208 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2\": container with ID starting with 4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2 not found: ID does not exist" containerID="4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.200253 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2"} err="failed to get container status \"4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2\": rpc error: code = NotFound desc = could not find container \"4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2\": container with ID starting with 4fddb0b786bd471fde86d229bfe2e4a524b98a2673f780d31e1d0afd2089bca2 not found: ID does not exist" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.200283 4799 scope.go:117] "RemoveContainer" containerID="2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13" Jan 27 08:54:25 crc kubenswrapper[4799]: E0127 
08:54:25.200637 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13\": container with ID starting with 2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13 not found: ID does not exist" containerID="2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13" Jan 27 08:54:25 crc kubenswrapper[4799]: I0127 08:54:25.200671 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13"} err="failed to get container status \"2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13\": rpc error: code = NotFound desc = could not find container \"2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13\": container with ID starting with 2376f2f4f8bc8c240b0b5808b0a1e4fea0e96f80abea291d4f11427500e0af13 not found: ID does not exist" Jan 27 08:54:26 crc kubenswrapper[4799]: I0127 08:54:26.452279 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:54:26 crc kubenswrapper[4799]: E0127 08:54:26.452821 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:54:26 crc kubenswrapper[4799]: I0127 08:54:26.460171 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" path="/var/lib/kubelet/pods/7408787f-acea-4bdc-91d8-d42fb17b54ce/volumes" Jan 27 08:54:39 crc kubenswrapper[4799]: I0127 08:54:39.452632 
4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:54:39 crc kubenswrapper[4799]: E0127 08:54:39.453685 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:54:51 crc kubenswrapper[4799]: I0127 08:54:51.451890 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:54:51 crc kubenswrapper[4799]: E0127 08:54:51.452694 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 08:55:05 crc kubenswrapper[4799]: I0127 08:55:05.452227 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:55:06 crc kubenswrapper[4799]: I0127 08:55:06.478542 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"2f4b68b4e376d60ecd74e2f36e3efc5a8ff7958d696e54cbc819055f075de3ba"} Jan 27 08:57:23 crc kubenswrapper[4799]: I0127 08:57:23.731829 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:57:23 crc kubenswrapper[4799]: I0127 08:57:23.732398 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:57:53 crc kubenswrapper[4799]: I0127 08:57:53.731480 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:57:53 crc kubenswrapper[4799]: I0127 08:57:53.732203 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.731189 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.731753 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.731800 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.732445 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f4b68b4e376d60ecd74e2f36e3efc5a8ff7958d696e54cbc819055f075de3ba"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.732502 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://2f4b68b4e376d60ecd74e2f36e3efc5a8ff7958d696e54cbc819055f075de3ba" gracePeriod=600 Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.886280 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="2f4b68b4e376d60ecd74e2f36e3efc5a8ff7958d696e54cbc819055f075de3ba" exitCode=0 Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.886676 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"2f4b68b4e376d60ecd74e2f36e3efc5a8ff7958d696e54cbc819055f075de3ba"} Jan 27 08:58:23 crc kubenswrapper[4799]: I0127 08:58:23.886712 4799 scope.go:117] "RemoveContainer" containerID="b38aef1b19e2a84613102b4f8e181353467d1b4bfa4317111c876286b82cf234" Jan 27 08:58:24 crc kubenswrapper[4799]: I0127 08:58:24.896886 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e"} Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.198698 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t"] Jan 27 09:00:00 crc kubenswrapper[4799]: E0127 09:00:00.200234 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerName="extract-content" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200278 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerName="extract-content" Jan 27 09:00:00 crc kubenswrapper[4799]: E0127 09:00:00.200313 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="extract-utilities" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200325 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="extract-utilities" Jan 27 09:00:00 crc kubenswrapper[4799]: E0127 09:00:00.200342 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerName="extract-utilities" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200352 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerName="extract-utilities" Jan 27 09:00:00 crc kubenswrapper[4799]: E0127 09:00:00.200366 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerName="registry-server" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200374 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" 
containerName="registry-server" Jan 27 09:00:00 crc kubenswrapper[4799]: E0127 09:00:00.200397 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="registry-server" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200405 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="registry-server" Jan 27 09:00:00 crc kubenswrapper[4799]: E0127 09:00:00.200422 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="extract-content" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200429 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="extract-content" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200631 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7408787f-acea-4bdc-91d8-d42fb17b54ce" containerName="registry-server" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.200647 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3504cb6-69a2-49c9-8259-5ea5392cbeb0" containerName="registry-server" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.201317 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.204203 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.207772 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.210070 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t"] Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.346269 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3fb7cb9-2bed-4322-bef1-13c066775faf-secret-volume\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.346535 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5b6v\" (UniqueName: \"kubernetes.io/projected/c3fb7cb9-2bed-4322-bef1-13c066775faf-kube-api-access-k5b6v\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.346599 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3fb7cb9-2bed-4322-bef1-13c066775faf-config-volume\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.448010 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3fb7cb9-2bed-4322-bef1-13c066775faf-secret-volume\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.448131 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5b6v\" (UniqueName: \"kubernetes.io/projected/c3fb7cb9-2bed-4322-bef1-13c066775faf-kube-api-access-k5b6v\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.448158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3fb7cb9-2bed-4322-bef1-13c066775faf-config-volume\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.449883 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3fb7cb9-2bed-4322-bef1-13c066775faf-config-volume\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.454623 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c3fb7cb9-2bed-4322-bef1-13c066775faf-secret-volume\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.474109 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5b6v\" (UniqueName: \"kubernetes.io/projected/c3fb7cb9-2bed-4322-bef1-13c066775faf-kube-api-access-k5b6v\") pod \"collect-profiles-29491740-gj79t\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.527998 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:00 crc kubenswrapper[4799]: I0127 09:00:00.965663 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t"] Jan 27 09:00:00 crc kubenswrapper[4799]: W0127 09:00:00.981148 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fb7cb9_2bed_4322_bef1_13c066775faf.slice/crio-67685e2b2d5a6c33fcc7be3cbeef24f3cecf14c70d2237dc24534112222b0558 WatchSource:0}: Error finding container 67685e2b2d5a6c33fcc7be3cbeef24f3cecf14c70d2237dc24534112222b0558: Status 404 returned error can't find the container with id 67685e2b2d5a6c33fcc7be3cbeef24f3cecf14c70d2237dc24534112222b0558 Jan 27 09:00:01 crc kubenswrapper[4799]: I0127 09:00:01.712717 4799 generic.go:334] "Generic (PLEG): container finished" podID="c3fb7cb9-2bed-4322-bef1-13c066775faf" containerID="522e8dfd621985c71cbf9bfe5a0795f5a5eb1eb421d64baa7c496a6824962f7e" exitCode=0 Jan 27 09:00:01 crc kubenswrapper[4799]: I0127 09:00:01.712790 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" event={"ID":"c3fb7cb9-2bed-4322-bef1-13c066775faf","Type":"ContainerDied","Data":"522e8dfd621985c71cbf9bfe5a0795f5a5eb1eb421d64baa7c496a6824962f7e"} Jan 27 09:00:01 crc kubenswrapper[4799]: I0127 09:00:01.712836 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" event={"ID":"c3fb7cb9-2bed-4322-bef1-13c066775faf","Type":"ContainerStarted","Data":"67685e2b2d5a6c33fcc7be3cbeef24f3cecf14c70d2237dc24534112222b0558"} Jan 27 09:00:02 crc kubenswrapper[4799]: I0127 09:00:02.987830 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.094879 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5b6v\" (UniqueName: \"kubernetes.io/projected/c3fb7cb9-2bed-4322-bef1-13c066775faf-kube-api-access-k5b6v\") pod \"c3fb7cb9-2bed-4322-bef1-13c066775faf\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.094970 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3fb7cb9-2bed-4322-bef1-13c066775faf-config-volume\") pod \"c3fb7cb9-2bed-4322-bef1-13c066775faf\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.095003 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3fb7cb9-2bed-4322-bef1-13c066775faf-secret-volume\") pod \"c3fb7cb9-2bed-4322-bef1-13c066775faf\" (UID: \"c3fb7cb9-2bed-4322-bef1-13c066775faf\") " Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.095951 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c3fb7cb9-2bed-4322-bef1-13c066775faf-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3fb7cb9-2bed-4322-bef1-13c066775faf" (UID: "c3fb7cb9-2bed-4322-bef1-13c066775faf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.100752 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3fb7cb9-2bed-4322-bef1-13c066775faf-kube-api-access-k5b6v" (OuterVolumeSpecName: "kube-api-access-k5b6v") pod "c3fb7cb9-2bed-4322-bef1-13c066775faf" (UID: "c3fb7cb9-2bed-4322-bef1-13c066775faf"). InnerVolumeSpecName "kube-api-access-k5b6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.103509 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fb7cb9-2bed-4322-bef1-13c066775faf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c3fb7cb9-2bed-4322-bef1-13c066775faf" (UID: "c3fb7cb9-2bed-4322-bef1-13c066775faf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.196345 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3fb7cb9-2bed-4322-bef1-13c066775faf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.196424 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3fb7cb9-2bed-4322-bef1-13c066775faf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.196436 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5b6v\" (UniqueName: \"kubernetes.io/projected/c3fb7cb9-2bed-4322-bef1-13c066775faf-kube-api-access-k5b6v\") on node \"crc\" DevicePath \"\"" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.731236 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" event={"ID":"c3fb7cb9-2bed-4322-bef1-13c066775faf","Type":"ContainerDied","Data":"67685e2b2d5a6c33fcc7be3cbeef24f3cecf14c70d2237dc24534112222b0558"} Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.731295 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67685e2b2d5a6c33fcc7be3cbeef24f3cecf14c70d2237dc24534112222b0558" Jan 27 09:00:03 crc kubenswrapper[4799]: I0127 09:00:03.731393 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t" Jan 27 09:00:04 crc kubenswrapper[4799]: I0127 09:00:04.064657 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t"] Jan 27 09:00:04 crc kubenswrapper[4799]: I0127 09:00:04.070604 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491695-qzn4t"] Jan 27 09:00:04 crc kubenswrapper[4799]: I0127 09:00:04.462836 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9afe2e1b-9426-4065-b6ee-1df70cdf7b25" path="/var/lib/kubelet/pods/9afe2e1b-9426-4065-b6ee-1df70cdf7b25/volumes" Jan 27 09:00:39 crc kubenswrapper[4799]: I0127 09:00:39.449722 4799 scope.go:117] "RemoveContainer" containerID="dd1955955a5347e9719e29cb8ab95880d34683ade4e36041d12a64bfa03e6f71" Jan 27 09:00:53 crc kubenswrapper[4799]: I0127 09:00:53.731590 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:00:53 crc kubenswrapper[4799]: I0127 09:00:53.732454 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.017411 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jjff8"] Jan 27 09:01:17 crc kubenswrapper[4799]: E0127 09:01:17.018384 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c3fb7cb9-2bed-4322-bef1-13c066775faf" containerName="collect-profiles" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.018400 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fb7cb9-2bed-4322-bef1-13c066775faf" containerName="collect-profiles" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.018603 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3fb7cb9-2bed-4322-bef1-13c066775faf" containerName="collect-profiles" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.019761 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.032637 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjff8"] Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.111087 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhs7m\" (UniqueName: \"kubernetes.io/projected/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-kube-api-access-xhs7m\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.111149 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-catalog-content\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.111199 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-utilities\") pod \"redhat-marketplace-jjff8\" (UID: 
\"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.212034 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhs7m\" (UniqueName: \"kubernetes.io/projected/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-kube-api-access-xhs7m\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.212088 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-catalog-content\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.212129 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-utilities\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.212673 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-utilities\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.212762 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-catalog-content\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " 
pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.231189 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhs7m\" (UniqueName: \"kubernetes.io/projected/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-kube-api-access-xhs7m\") pod \"redhat-marketplace-jjff8\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.336990 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:17 crc kubenswrapper[4799]: I0127 09:01:17.820993 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjff8"] Jan 27 09:01:18 crc kubenswrapper[4799]: I0127 09:01:18.365958 4799 generic.go:334] "Generic (PLEG): container finished" podID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerID="dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968" exitCode=0 Jan 27 09:01:18 crc kubenswrapper[4799]: I0127 09:01:18.366026 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjff8" event={"ID":"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e","Type":"ContainerDied","Data":"dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968"} Jan 27 09:01:18 crc kubenswrapper[4799]: I0127 09:01:18.366251 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjff8" event={"ID":"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e","Type":"ContainerStarted","Data":"fa4a3d6b7698fb0bf94502021c0a7a33d3a9c0dfb268e87d9a67fd173116d74e"} Jan 27 09:01:18 crc kubenswrapper[4799]: I0127 09:01:18.368469 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:01:19 crc kubenswrapper[4799]: I0127 09:01:19.375215 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jjff8" event={"ID":"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e","Type":"ContainerStarted","Data":"d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996"} Jan 27 09:01:20 crc kubenswrapper[4799]: I0127 09:01:20.382842 4799 generic.go:334] "Generic (PLEG): container finished" podID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerID="d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996" exitCode=0 Jan 27 09:01:20 crc kubenswrapper[4799]: I0127 09:01:20.382897 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjff8" event={"ID":"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e","Type":"ContainerDied","Data":"d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996"} Jan 27 09:01:21 crc kubenswrapper[4799]: I0127 09:01:21.392225 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjff8" event={"ID":"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e","Type":"ContainerStarted","Data":"c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba"} Jan 27 09:01:21 crc kubenswrapper[4799]: I0127 09:01:21.409461 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jjff8" podStartSLOduration=2.976064839 podStartE2EDuration="5.409445327s" podCreationTimestamp="2026-01-27 09:01:16 +0000 UTC" firstStartedPulling="2026-01-27 09:01:18.368205488 +0000 UTC m=+4544.679309563" lastFinishedPulling="2026-01-27 09:01:20.801585946 +0000 UTC m=+4547.112690051" observedRunningTime="2026-01-27 09:01:21.408792978 +0000 UTC m=+4547.719897143" watchObservedRunningTime="2026-01-27 09:01:21.409445327 +0000 UTC m=+4547.720549392" Jan 27 09:01:23 crc kubenswrapper[4799]: I0127 09:01:23.731604 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:01:23 crc kubenswrapper[4799]: I0127 09:01:23.732069 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:01:27 crc kubenswrapper[4799]: I0127 09:01:27.337217 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:27 crc kubenswrapper[4799]: I0127 09:01:27.337672 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:27 crc kubenswrapper[4799]: I0127 09:01:27.390493 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:27 crc kubenswrapper[4799]: I0127 09:01:27.476431 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:27 crc kubenswrapper[4799]: I0127 09:01:27.624738 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjff8"] Jan 27 09:01:29 crc kubenswrapper[4799]: I0127 09:01:29.461509 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jjff8" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="registry-server" containerID="cri-o://c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba" gracePeriod=2 Jan 27 09:01:29 crc kubenswrapper[4799]: I0127 09:01:29.878460 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.011837 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-utilities\") pod \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.012256 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhs7m\" (UniqueName: \"kubernetes.io/projected/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-kube-api-access-xhs7m\") pod \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.012402 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-catalog-content\") pod \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\" (UID: \"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e\") " Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.014581 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-utilities" (OuterVolumeSpecName: "utilities") pod "069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" (UID: "069a2e22-4ca4-4d29-82a6-e8b2f4befa9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.028641 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-kube-api-access-xhs7m" (OuterVolumeSpecName: "kube-api-access-xhs7m") pod "069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" (UID: "069a2e22-4ca4-4d29-82a6-e8b2f4befa9e"). InnerVolumeSpecName "kube-api-access-xhs7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.039934 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" (UID: "069a2e22-4ca4-4d29-82a6-e8b2f4befa9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.114463 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.114510 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.114525 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhs7m\" (UniqueName: \"kubernetes.io/projected/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e-kube-api-access-xhs7m\") on node \"crc\" DevicePath \"\"" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.479380 4799 generic.go:334] "Generic (PLEG): container finished" podID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerID="c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba" exitCode=0 Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.479431 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jjff8" event={"ID":"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e","Type":"ContainerDied","Data":"c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba"} Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.479484 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jjff8" event={"ID":"069a2e22-4ca4-4d29-82a6-e8b2f4befa9e","Type":"ContainerDied","Data":"fa4a3d6b7698fb0bf94502021c0a7a33d3a9c0dfb268e87d9a67fd173116d74e"} Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.479506 4799 scope.go:117] "RemoveContainer" containerID="c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.479454 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jjff8" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.512955 4799 scope.go:117] "RemoveContainer" containerID="d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.513562 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjff8"] Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.521937 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jjff8"] Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.540561 4799 scope.go:117] "RemoveContainer" containerID="dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.566252 4799 scope.go:117] "RemoveContainer" containerID="c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba" Jan 27 09:01:30 crc kubenswrapper[4799]: E0127 09:01:30.567136 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba\": container with ID starting with c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba not found: ID does not exist" containerID="c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.567199 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba"} err="failed to get container status \"c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba\": rpc error: code = NotFound desc = could not find container \"c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba\": container with ID starting with c900bb230c88f639fda6a0e403d2a22095271966eb0cb3cbe169a24249f1ccba not found: ID does not exist" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.567239 4799 scope.go:117] "RemoveContainer" containerID="d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996" Jan 27 09:01:30 crc kubenswrapper[4799]: E0127 09:01:30.568206 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996\": container with ID starting with d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996 not found: ID does not exist" containerID="d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.568260 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996"} err="failed to get container status \"d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996\": rpc error: code = NotFound desc = could not find container \"d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996\": container with ID starting with d353a8e265d7eb5779ce462a7a44395f5f8515c63cd1d549ab6f1ff6aa2fa996 not found: ID does not exist" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.568295 4799 scope.go:117] "RemoveContainer" containerID="dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968" Jan 27 09:01:30 crc kubenswrapper[4799]: E0127 
09:01:30.568737 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968\": container with ID starting with dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968 not found: ID does not exist" containerID="dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968" Jan 27 09:01:30 crc kubenswrapper[4799]: I0127 09:01:30.568779 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968"} err="failed to get container status \"dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968\": rpc error: code = NotFound desc = could not find container \"dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968\": container with ID starting with dd2ed3e143b7a0f1e9ee62302f47234bc04d19adc0c545e9225e28e89fae6968 not found: ID does not exist" Jan 27 09:01:32 crc kubenswrapper[4799]: I0127 09:01:32.467297 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" path="/var/lib/kubelet/pods/069a2e22-4ca4-4d29-82a6-e8b2f4befa9e/volumes" Jan 27 09:01:53 crc kubenswrapper[4799]: I0127 09:01:53.731341 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:01:53 crc kubenswrapper[4799]: I0127 09:01:53.732570 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 09:01:53 crc kubenswrapper[4799]: I0127 09:01:53.732680 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:01:53 crc kubenswrapper[4799]: I0127 09:01:53.733683 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:01:53 crc kubenswrapper[4799]: I0127 09:01:53.733825 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" gracePeriod=600 Jan 27 09:01:53 crc kubenswrapper[4799]: E0127 09:01:53.861621 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:01:54 crc kubenswrapper[4799]: I0127 09:01:54.685103 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" exitCode=0 Jan 27 09:01:54 crc kubenswrapper[4799]: I0127 09:01:54.685151 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e"} Jan 27 09:01:54 crc kubenswrapper[4799]: I0127 09:01:54.685196 4799 scope.go:117] "RemoveContainer" containerID="2f4b68b4e376d60ecd74e2f36e3efc5a8ff7958d696e54cbc819055f075de3ba" Jan 27 09:01:54 crc kubenswrapper[4799]: I0127 09:01:54.685786 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:01:54 crc kubenswrapper[4799]: E0127 09:01:54.688684 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.557173 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-x65vd"] Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.562052 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-x65vd"] Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.674092 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-qgz2z"] Jan 27 09:02:02 crc kubenswrapper[4799]: E0127 09:02:02.674523 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="extract-content" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.674549 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="extract-content" Jan 27 09:02:02 crc kubenswrapper[4799]: E0127 09:02:02.674570 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="registry-server" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.674579 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="registry-server" Jan 27 09:02:02 crc kubenswrapper[4799]: E0127 09:02:02.674589 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="extract-utilities" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.674599 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="extract-utilities" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.674771 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="069a2e22-4ca4-4d29-82a6-e8b2f4befa9e" containerName="registry-server" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.675333 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.678852 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.679438 4799 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5dffm" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.679457 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.680993 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.687389 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-qgz2z"] Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.818696 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46e2615e-f571-4735-b0fa-93fde1c91735-crc-storage\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.819116 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vffq\" (UniqueName: \"kubernetes.io/projected/46e2615e-f571-4735-b0fa-93fde1c91735-kube-api-access-8vffq\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.819202 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46e2615e-f571-4735-b0fa-93fde1c91735-node-mnt\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.921050 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46e2615e-f571-4735-b0fa-93fde1c91735-crc-storage\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.921125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vffq\" (UniqueName: \"kubernetes.io/projected/46e2615e-f571-4735-b0fa-93fde1c91735-kube-api-access-8vffq\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.921217 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46e2615e-f571-4735-b0fa-93fde1c91735-node-mnt\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.921640 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46e2615e-f571-4735-b0fa-93fde1c91735-node-mnt\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.922408 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46e2615e-f571-4735-b0fa-93fde1c91735-crc-storage\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:02 crc kubenswrapper[4799]: I0127 09:02:02.952488 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vffq\" (UniqueName: \"kubernetes.io/projected/46e2615e-f571-4735-b0fa-93fde1c91735-kube-api-access-8vffq\") pod \"crc-storage-crc-qgz2z\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:03 crc kubenswrapper[4799]: I0127 09:02:03.009787 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:03 crc kubenswrapper[4799]: I0127 09:02:03.451779 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-qgz2z"] Jan 27 09:02:03 crc kubenswrapper[4799]: W0127 09:02:03.457597 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46e2615e_f571_4735_b0fa_93fde1c91735.slice/crio-9ecb53d401db9ec2c0c9e2e90202573fdf39865c523c2a987f74969396a8bdbe WatchSource:0}: Error finding container 9ecb53d401db9ec2c0c9e2e90202573fdf39865c523c2a987f74969396a8bdbe: Status 404 returned error can't find the container with id 9ecb53d401db9ec2c0c9e2e90202573fdf39865c523c2a987f74969396a8bdbe Jan 27 09:02:03 crc kubenswrapper[4799]: I0127 09:02:03.761957 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-qgz2z" event={"ID":"46e2615e-f571-4735-b0fa-93fde1c91735","Type":"ContainerStarted","Data":"9ecb53d401db9ec2c0c9e2e90202573fdf39865c523c2a987f74969396a8bdbe"} Jan 27 09:02:04 crc kubenswrapper[4799]: I0127 09:02:04.472200 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a490c085-39fb-4831-89cf-2ccaf0826bdd" path="/var/lib/kubelet/pods/a490c085-39fb-4831-89cf-2ccaf0826bdd/volumes" Jan 27 09:02:04 crc kubenswrapper[4799]: I0127 09:02:04.774241 4799 generic.go:334] "Generic (PLEG): container finished" podID="46e2615e-f571-4735-b0fa-93fde1c91735" containerID="607c0d7ea684b8229f809164cd1162587bdcee77fa82bdad5a0d7779ebd9693b" exitCode=0 Jan 27 09:02:04 crc kubenswrapper[4799]: I0127 09:02:04.774389 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-qgz2z" event={"ID":"46e2615e-f571-4735-b0fa-93fde1c91735","Type":"ContainerDied","Data":"607c0d7ea684b8229f809164cd1162587bdcee77fa82bdad5a0d7779ebd9693b"} Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.146894 4799 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.296150 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46e2615e-f571-4735-b0fa-93fde1c91735-node-mnt\") pod \"46e2615e-f571-4735-b0fa-93fde1c91735\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.296294 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vffq\" (UniqueName: \"kubernetes.io/projected/46e2615e-f571-4735-b0fa-93fde1c91735-kube-api-access-8vffq\") pod \"46e2615e-f571-4735-b0fa-93fde1c91735\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.296409 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46e2615e-f571-4735-b0fa-93fde1c91735-crc-storage\") pod \"46e2615e-f571-4735-b0fa-93fde1c91735\" (UID: \"46e2615e-f571-4735-b0fa-93fde1c91735\") " Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.296408 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e2615e-f571-4735-b0fa-93fde1c91735-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "46e2615e-f571-4735-b0fa-93fde1c91735" (UID: "46e2615e-f571-4735-b0fa-93fde1c91735"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.296839 4799 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46e2615e-f571-4735-b0fa-93fde1c91735-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.302420 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e2615e-f571-4735-b0fa-93fde1c91735-kube-api-access-8vffq" (OuterVolumeSpecName: "kube-api-access-8vffq") pod "46e2615e-f571-4735-b0fa-93fde1c91735" (UID: "46e2615e-f571-4735-b0fa-93fde1c91735"). InnerVolumeSpecName "kube-api-access-8vffq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.317362 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e2615e-f571-4735-b0fa-93fde1c91735-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "46e2615e-f571-4735-b0fa-93fde1c91735" (UID: "46e2615e-f571-4735-b0fa-93fde1c91735"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.398648 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vffq\" (UniqueName: \"kubernetes.io/projected/46e2615e-f571-4735-b0fa-93fde1c91735-kube-api-access-8vffq\") on node \"crc\" DevicePath \"\"" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.398943 4799 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46e2615e-f571-4735-b0fa-93fde1c91735-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.466994 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:02:06 crc kubenswrapper[4799]: E0127 09:02:06.467570 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.795105 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-qgz2z" event={"ID":"46e2615e-f571-4735-b0fa-93fde1c91735","Type":"ContainerDied","Data":"9ecb53d401db9ec2c0c9e2e90202573fdf39865c523c2a987f74969396a8bdbe"} Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.795166 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ecb53d401db9ec2c0c9e2e90202573fdf39865c523c2a987f74969396a8bdbe" Jan 27 09:02:06 crc kubenswrapper[4799]: I0127 09:02:06.795171 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-qgz2z" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.467769 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-qgz2z"] Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.477170 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-qgz2z"] Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.595193 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-6mc72"] Jan 27 09:02:08 crc kubenswrapper[4799]: E0127 09:02:08.595852 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e2615e-f571-4735-b0fa-93fde1c91735" containerName="storage" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.595884 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e2615e-f571-4735-b0fa-93fde1c91735" containerName="storage" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.596217 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e2615e-f571-4735-b0fa-93fde1c91735" containerName="storage" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.597183 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.599877 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.599890 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.600456 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.603877 4799 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5dffm" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.605219 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6mc72"] Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.734596 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/db2373b7-4d3f-4c20-acbf-94daa7e303bf-node-mnt\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.734703 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qqm\" (UniqueName: \"kubernetes.io/projected/db2373b7-4d3f-4c20-acbf-94daa7e303bf-kube-api-access-j4qqm\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.734763 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/db2373b7-4d3f-4c20-acbf-94daa7e303bf-crc-storage\") pod \"crc-storage-crc-6mc72\" (UID: 
\"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.835879 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/db2373b7-4d3f-4c20-acbf-94daa7e303bf-node-mnt\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.835963 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qqm\" (UniqueName: \"kubernetes.io/projected/db2373b7-4d3f-4c20-acbf-94daa7e303bf-kube-api-access-j4qqm\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.835986 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/db2373b7-4d3f-4c20-acbf-94daa7e303bf-crc-storage\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.836189 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/db2373b7-4d3f-4c20-acbf-94daa7e303bf-node-mnt\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.836670 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/db2373b7-4d3f-4c20-acbf-94daa7e303bf-crc-storage\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.863407 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qqm\" (UniqueName: \"kubernetes.io/projected/db2373b7-4d3f-4c20-acbf-94daa7e303bf-kube-api-access-j4qqm\") pod \"crc-storage-crc-6mc72\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:08 crc kubenswrapper[4799]: I0127 09:02:08.915028 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:09 crc kubenswrapper[4799]: I0127 09:02:09.418102 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6mc72"] Jan 27 09:02:09 crc kubenswrapper[4799]: I0127 09:02:09.818537 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6mc72" event={"ID":"db2373b7-4d3f-4c20-acbf-94daa7e303bf","Type":"ContainerStarted","Data":"b3b0d681e324597b35e2441b0f3b8dcaf6751db5a360c61b86261aa2c5b53ed8"} Jan 27 09:02:10 crc kubenswrapper[4799]: I0127 09:02:10.468754 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e2615e-f571-4735-b0fa-93fde1c91735" path="/var/lib/kubelet/pods/46e2615e-f571-4735-b0fa-93fde1c91735/volumes" Jan 27 09:02:10 crc kubenswrapper[4799]: I0127 09:02:10.840372 4799 generic.go:334] "Generic (PLEG): container finished" podID="db2373b7-4d3f-4c20-acbf-94daa7e303bf" containerID="373046e30ed9b94d750049961b768a0c5771e185cd470869d9150cf6baae7b35" exitCode=0 Jan 27 09:02:10 crc kubenswrapper[4799]: I0127 09:02:10.840432 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6mc72" event={"ID":"db2373b7-4d3f-4c20-acbf-94daa7e303bf","Type":"ContainerDied","Data":"373046e30ed9b94d750049961b768a0c5771e185cd470869d9150cf6baae7b35"} Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.136905 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.293117 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/db2373b7-4d3f-4c20-acbf-94daa7e303bf-crc-storage\") pod \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.293252 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/db2373b7-4d3f-4c20-acbf-94daa7e303bf-node-mnt\") pod \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.293342 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db2373b7-4d3f-4c20-acbf-94daa7e303bf-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "db2373b7-4d3f-4c20-acbf-94daa7e303bf" (UID: "db2373b7-4d3f-4c20-acbf-94daa7e303bf"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.293457 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4qqm\" (UniqueName: \"kubernetes.io/projected/db2373b7-4d3f-4c20-acbf-94daa7e303bf-kube-api-access-j4qqm\") pod \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\" (UID: \"db2373b7-4d3f-4c20-acbf-94daa7e303bf\") " Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.293856 4799 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/db2373b7-4d3f-4c20-acbf-94daa7e303bf-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.300959 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2373b7-4d3f-4c20-acbf-94daa7e303bf-kube-api-access-j4qqm" (OuterVolumeSpecName: "kube-api-access-j4qqm") pod "db2373b7-4d3f-4c20-acbf-94daa7e303bf" (UID: "db2373b7-4d3f-4c20-acbf-94daa7e303bf"). InnerVolumeSpecName "kube-api-access-j4qqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.335399 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2373b7-4d3f-4c20-acbf-94daa7e303bf-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "db2373b7-4d3f-4c20-acbf-94daa7e303bf" (UID: "db2373b7-4d3f-4c20-acbf-94daa7e303bf"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.395047 4799 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/db2373b7-4d3f-4c20-acbf-94daa7e303bf-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.395102 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4qqm\" (UniqueName: \"kubernetes.io/projected/db2373b7-4d3f-4c20-acbf-94daa7e303bf-kube-api-access-j4qqm\") on node \"crc\" DevicePath \"\"" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.869327 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6mc72" event={"ID":"db2373b7-4d3f-4c20-acbf-94daa7e303bf","Type":"ContainerDied","Data":"b3b0d681e324597b35e2441b0f3b8dcaf6751db5a360c61b86261aa2c5b53ed8"} Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.869367 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6mc72" Jan 27 09:02:12 crc kubenswrapper[4799]: I0127 09:02:12.869369 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3b0d681e324597b35e2441b0f3b8dcaf6751db5a360c61b86261aa2c5b53ed8" Jan 27 09:02:20 crc kubenswrapper[4799]: I0127 09:02:20.451098 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:02:20 crc kubenswrapper[4799]: E0127 09:02:20.451943 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:02:32 crc kubenswrapper[4799]: I0127 09:02:32.451856 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:02:32 crc kubenswrapper[4799]: E0127 09:02:32.452544 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:02:39 crc kubenswrapper[4799]: I0127 09:02:39.525815 4799 scope.go:117] "RemoveContainer" containerID="45e689bb17e3a2ac39c362e0a8f7a1fdd7d647298079cb5d445febefd7b9a5b6" Jan 27 09:02:46 crc kubenswrapper[4799]: I0127 09:02:46.452137 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 
09:02:46 crc kubenswrapper[4799]: E0127 09:02:46.452738 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.650882 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xfkf"] Jan 27 09:02:54 crc kubenswrapper[4799]: E0127 09:02:54.651627 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2373b7-4d3f-4c20-acbf-94daa7e303bf" containerName="storage" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.651639 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2373b7-4d3f-4c20-acbf-94daa7e303bf" containerName="storage" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.651821 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2373b7-4d3f-4c20-acbf-94daa7e303bf" containerName="storage" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.652852 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.668035 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xfkf"] Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.795277 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-utilities\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.795416 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsfs\" (UniqueName: \"kubernetes.io/projected/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-kube-api-access-jxsfs\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.795439 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-catalog-content\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.896576 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-utilities\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.896674 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jxsfs\" (UniqueName: \"kubernetes.io/projected/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-kube-api-access-jxsfs\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.896700 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-catalog-content\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.897386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-catalog-content\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.897635 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-utilities\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.916943 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxsfs\" (UniqueName: \"kubernetes.io/projected/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-kube-api-access-jxsfs\") pod \"certified-operators-6xfkf\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:54 crc kubenswrapper[4799]: I0127 09:02:54.969180 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:02:55 crc kubenswrapper[4799]: I0127 09:02:55.445688 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xfkf"] Jan 27 09:02:56 crc kubenswrapper[4799]: I0127 09:02:56.213991 4799 generic.go:334] "Generic (PLEG): container finished" podID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerID="9d1f93a03e0aca37fb91b0c85d1f6188674bbc93efa8568ade9dd3cac7934781" exitCode=0 Jan 27 09:02:56 crc kubenswrapper[4799]: I0127 09:02:56.215416 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xfkf" event={"ID":"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977","Type":"ContainerDied","Data":"9d1f93a03e0aca37fb91b0c85d1f6188674bbc93efa8568ade9dd3cac7934781"} Jan 27 09:02:56 crc kubenswrapper[4799]: I0127 09:02:56.215483 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xfkf" event={"ID":"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977","Type":"ContainerStarted","Data":"622af0afc25904ed58be86f0ec96c7503d0aca1b65fa0b586d5a4c2a378f3f6c"} Jan 27 09:02:57 crc kubenswrapper[4799]: I0127 09:02:57.222642 4799 generic.go:334] "Generic (PLEG): container finished" podID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerID="1cba28e7e0b3b57c5dff72f0f8f91c0329d01fff5e2b32eef31ee77509341643" exitCode=0 Jan 27 09:02:57 crc kubenswrapper[4799]: I0127 09:02:57.222812 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xfkf" event={"ID":"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977","Type":"ContainerDied","Data":"1cba28e7e0b3b57c5dff72f0f8f91c0329d01fff5e2b32eef31ee77509341643"} Jan 27 09:02:58 crc kubenswrapper[4799]: I0127 09:02:58.231365 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xfkf" 
event={"ID":"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977","Type":"ContainerStarted","Data":"8c4f7dc514befd13f153982d1bb4035958939a11a3a4e8e0ed0ceca27dcb5e59"} Jan 27 09:02:58 crc kubenswrapper[4799]: I0127 09:02:58.248604 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xfkf" podStartSLOduration=2.867591052 podStartE2EDuration="4.248584313s" podCreationTimestamp="2026-01-27 09:02:54 +0000 UTC" firstStartedPulling="2026-01-27 09:02:56.216146283 +0000 UTC m=+4642.527250348" lastFinishedPulling="2026-01-27 09:02:57.597139544 +0000 UTC m=+4643.908243609" observedRunningTime="2026-01-27 09:02:58.246778284 +0000 UTC m=+4644.557882359" watchObservedRunningTime="2026-01-27 09:02:58.248584313 +0000 UTC m=+4644.559688378" Jan 27 09:02:58 crc kubenswrapper[4799]: I0127 09:02:58.451266 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:02:58 crc kubenswrapper[4799]: E0127 09:02:58.451520 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:03:04 crc kubenswrapper[4799]: I0127 09:03:04.970139 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:03:04 crc kubenswrapper[4799]: I0127 09:03:04.970744 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:03:05 crc kubenswrapper[4799]: I0127 09:03:05.085861 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:03:05 crc kubenswrapper[4799]: I0127 09:03:05.321962 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:03:05 crc kubenswrapper[4799]: I0127 09:03:05.380107 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xfkf"] Jan 27 09:03:07 crc kubenswrapper[4799]: I0127 09:03:07.296430 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xfkf" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="registry-server" containerID="cri-o://8c4f7dc514befd13f153982d1bb4035958939a11a3a4e8e0ed0ceca27dcb5e59" gracePeriod=2 Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.304949 4799 generic.go:334] "Generic (PLEG): container finished" podID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerID="8c4f7dc514befd13f153982d1bb4035958939a11a3a4e8e0ed0ceca27dcb5e59" exitCode=0 Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.304986 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xfkf" event={"ID":"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977","Type":"ContainerDied","Data":"8c4f7dc514befd13f153982d1bb4035958939a11a3a4e8e0ed0ceca27dcb5e59"} Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.781270 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.919478 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-catalog-content\") pod \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.919648 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-utilities\") pod \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.919691 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxsfs\" (UniqueName: \"kubernetes.io/projected/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-kube-api-access-jxsfs\") pod \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\" (UID: \"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977\") " Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.920803 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-utilities" (OuterVolumeSpecName: "utilities") pod "d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" (UID: "d7e38387-2dcc-4957-8cb0-f5bfe0ae9977"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.925159 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-kube-api-access-jxsfs" (OuterVolumeSpecName: "kube-api-access-jxsfs") pod "d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" (UID: "d7e38387-2dcc-4957-8cb0-f5bfe0ae9977"). InnerVolumeSpecName "kube-api-access-jxsfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:03:08 crc kubenswrapper[4799]: I0127 09:03:08.961104 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" (UID: "d7e38387-2dcc-4957-8cb0-f5bfe0ae9977"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.021438 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.021484 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxsfs\" (UniqueName: \"kubernetes.io/projected/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-kube-api-access-jxsfs\") on node \"crc\" DevicePath \"\"" Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.021502 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.317131 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xfkf" event={"ID":"d7e38387-2dcc-4957-8cb0-f5bfe0ae9977","Type":"ContainerDied","Data":"622af0afc25904ed58be86f0ec96c7503d0aca1b65fa0b586d5a4c2a378f3f6c"} Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.317198 4799 scope.go:117] "RemoveContainer" containerID="8c4f7dc514befd13f153982d1bb4035958939a11a3a4e8e0ed0ceca27dcb5e59" Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.317283 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xfkf" Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.339011 4799 scope.go:117] "RemoveContainer" containerID="1cba28e7e0b3b57c5dff72f0f8f91c0329d01fff5e2b32eef31ee77509341643" Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.360352 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xfkf"] Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.368812 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xfkf"] Jan 27 09:03:09 crc kubenswrapper[4799]: I0127 09:03:09.383470 4799 scope.go:117] "RemoveContainer" containerID="9d1f93a03e0aca37fb91b0c85d1f6188674bbc93efa8568ade9dd3cac7934781" Jan 27 09:03:10 crc kubenswrapper[4799]: I0127 09:03:10.463951 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" path="/var/lib/kubelet/pods/d7e38387-2dcc-4957-8cb0-f5bfe0ae9977/volumes" Jan 27 09:03:12 crc kubenswrapper[4799]: I0127 09:03:12.452376 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:03:12 crc kubenswrapper[4799]: E0127 09:03:12.453137 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:03:25 crc kubenswrapper[4799]: I0127 09:03:25.451700 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:03:25 crc kubenswrapper[4799]: E0127 09:03:25.452764 4799 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:03:38 crc kubenswrapper[4799]: I0127 09:03:38.451549 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:03:38 crc kubenswrapper[4799]: E0127 09:03:38.452472 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:03:49 crc kubenswrapper[4799]: I0127 09:03:49.456365 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:03:49 crc kubenswrapper[4799]: E0127 09:03:49.458048 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:04:00 crc kubenswrapper[4799]: I0127 09:04:00.452272 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:04:00 crc kubenswrapper[4799]: E0127 09:04:00.453237 4799 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:04:12 crc kubenswrapper[4799]: I0127 09:04:12.451367 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:04:12 crc kubenswrapper[4799]: E0127 09:04:12.452541 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:04:23 crc kubenswrapper[4799]: I0127 09:04:23.451239 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:04:23 crc kubenswrapper[4799]: E0127 09:04:23.452143 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:04:37 crc kubenswrapper[4799]: I0127 09:04:37.451770 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:04:37 crc kubenswrapper[4799]: E0127 09:04:37.452491 4799 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.175168 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9cdfk"] Jan 27 09:04:47 crc kubenswrapper[4799]: E0127 09:04:47.176173 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="extract-content" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.176191 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="extract-content" Jan 27 09:04:47 crc kubenswrapper[4799]: E0127 09:04:47.176216 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="registry-server" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.176223 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="registry-server" Jan 27 09:04:47 crc kubenswrapper[4799]: E0127 09:04:47.176239 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="extract-utilities" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.176246 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="extract-utilities" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.176448 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e38387-2dcc-4957-8cb0-f5bfe0ae9977" containerName="registry-server" Jan 27 
09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.177660 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.193150 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9cdfk"] Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.264691 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-utilities\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.264761 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-catalog-content\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.264860 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgqb\" (UniqueName: \"kubernetes.io/projected/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-kube-api-access-stgqb\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.366208 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-utilities\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc 
kubenswrapper[4799]: I0127 09:04:47.366262 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-catalog-content\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.366356 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgqb\" (UniqueName: \"kubernetes.io/projected/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-kube-api-access-stgqb\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.366965 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-utilities\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.367031 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-catalog-content\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.388124 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgqb\" (UniqueName: \"kubernetes.io/projected/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-kube-api-access-stgqb\") pod \"redhat-operators-9cdfk\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.540530 4799 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:47 crc kubenswrapper[4799]: I0127 09:04:47.949753 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9cdfk"] Jan 27 09:04:47 crc kubenswrapper[4799]: W0127 09:04:47.952557 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod341b6c05_d5cf_4c19_bd2b_41adb309c9d8.slice/crio-4b0d14b75d210e7a105a095efefe1ff8bca54babbe68259d6dcd5f29e5bd693d WatchSource:0}: Error finding container 4b0d14b75d210e7a105a095efefe1ff8bca54babbe68259d6dcd5f29e5bd693d: Status 404 returned error can't find the container with id 4b0d14b75d210e7a105a095efefe1ff8bca54babbe68259d6dcd5f29e5bd693d Jan 27 09:04:48 crc kubenswrapper[4799]: I0127 09:04:48.070854 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cdfk" event={"ID":"341b6c05-d5cf-4c19-bd2b-41adb309c9d8","Type":"ContainerStarted","Data":"4b0d14b75d210e7a105a095efefe1ff8bca54babbe68259d6dcd5f29e5bd693d"} Jan 27 09:04:49 crc kubenswrapper[4799]: I0127 09:04:49.081385 4799 generic.go:334] "Generic (PLEG): container finished" podID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerID="142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174" exitCode=0 Jan 27 09:04:49 crc kubenswrapper[4799]: I0127 09:04:49.081470 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cdfk" event={"ID":"341b6c05-d5cf-4c19-bd2b-41adb309c9d8","Type":"ContainerDied","Data":"142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174"} Jan 27 09:04:51 crc kubenswrapper[4799]: I0127 09:04:51.098584 4799 generic.go:334] "Generic (PLEG): container finished" podID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerID="8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d" exitCode=0 Jan 27 09:04:51 
crc kubenswrapper[4799]: I0127 09:04:51.099186 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cdfk" event={"ID":"341b6c05-d5cf-4c19-bd2b-41adb309c9d8","Type":"ContainerDied","Data":"8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d"} Jan 27 09:04:51 crc kubenswrapper[4799]: I0127 09:04:51.452709 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:04:51 crc kubenswrapper[4799]: E0127 09:04:51.453117 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:04:52 crc kubenswrapper[4799]: I0127 09:04:52.109675 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cdfk" event={"ID":"341b6c05-d5cf-4c19-bd2b-41adb309c9d8","Type":"ContainerStarted","Data":"e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0"} Jan 27 09:04:56 crc kubenswrapper[4799]: I0127 09:04:56.849819 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9cdfk" podStartSLOduration=7.353853479 podStartE2EDuration="9.849787322s" podCreationTimestamp="2026-01-27 09:04:47 +0000 UTC" firstStartedPulling="2026-01-27 09:04:49.083063754 +0000 UTC m=+4755.394167819" lastFinishedPulling="2026-01-27 09:04:51.578997567 +0000 UTC m=+4757.890101662" observedRunningTime="2026-01-27 09:04:52.132529526 +0000 UTC m=+4758.443633641" watchObservedRunningTime="2026-01-27 09:04:56.849787322 +0000 UTC m=+4763.160891397" Jan 27 09:04:56 crc kubenswrapper[4799]: I0127 09:04:56.859021 4799 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5m9d5"] Jan 27 09:04:56 crc kubenswrapper[4799]: I0127 09:04:56.861158 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:56 crc kubenswrapper[4799]: I0127 09:04:56.876480 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5m9d5"] Jan 27 09:04:56 crc kubenswrapper[4799]: I0127 09:04:56.905697 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-utilities\") pod \"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:56 crc kubenswrapper[4799]: I0127 09:04:56.905793 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2bz\" (UniqueName: \"kubernetes.io/projected/9320bcde-0dba-460a-81f0-3d3470e0f9b5-kube-api-access-vd2bz\") pod \"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:56 crc kubenswrapper[4799]: I0127 09:04:56.905835 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-catalog-content\") pod \"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.006975 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd2bz\" (UniqueName: \"kubernetes.io/projected/9320bcde-0dba-460a-81f0-3d3470e0f9b5-kube-api-access-vd2bz\") pod 
\"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.007061 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-catalog-content\") pod \"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.007158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-utilities\") pod \"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.007614 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-catalog-content\") pod \"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.007628 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-utilities\") pod \"community-operators-5m9d5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.029745 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd2bz\" (UniqueName: \"kubernetes.io/projected/9320bcde-0dba-460a-81f0-3d3470e0f9b5-kube-api-access-vd2bz\") pod \"community-operators-5m9d5\" (UID: 
\"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.190550 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.541260 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.541726 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:04:57 crc kubenswrapper[4799]: I0127 09:04:57.728826 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5m9d5"] Jan 27 09:04:58 crc kubenswrapper[4799]: I0127 09:04:58.150748 4799 generic.go:334] "Generic (PLEG): container finished" podID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerID="a93379cbabe10c4773e15c75fe098a96618e4de326ff981fca1578e409749aa0" exitCode=0 Jan 27 09:04:58 crc kubenswrapper[4799]: I0127 09:04:58.150784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m9d5" event={"ID":"9320bcde-0dba-460a-81f0-3d3470e0f9b5","Type":"ContainerDied","Data":"a93379cbabe10c4773e15c75fe098a96618e4de326ff981fca1578e409749aa0"} Jan 27 09:04:58 crc kubenswrapper[4799]: I0127 09:04:58.150828 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m9d5" event={"ID":"9320bcde-0dba-460a-81f0-3d3470e0f9b5","Type":"ContainerStarted","Data":"311d76b4afaf7f728c0584fd21cebd11ba69d1b037c998596239ab9537420f4b"} Jan 27 09:04:58 crc kubenswrapper[4799]: I0127 09:04:58.585128 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9cdfk" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="registry-server" 
probeResult="failure" output=< Jan 27 09:04:58 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 09:04:58 crc kubenswrapper[4799]: > Jan 27 09:05:00 crc kubenswrapper[4799]: I0127 09:05:00.170069 4799 generic.go:334] "Generic (PLEG): container finished" podID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerID="7721abeec753fd11cea9587708d8a3ba76c6fa8a58d524688e57bbd6a8080ae5" exitCode=0 Jan 27 09:05:00 crc kubenswrapper[4799]: I0127 09:05:00.170295 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m9d5" event={"ID":"9320bcde-0dba-460a-81f0-3d3470e0f9b5","Type":"ContainerDied","Data":"7721abeec753fd11cea9587708d8a3ba76c6fa8a58d524688e57bbd6a8080ae5"} Jan 27 09:05:02 crc kubenswrapper[4799]: I0127 09:05:02.187851 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m9d5" event={"ID":"9320bcde-0dba-460a-81f0-3d3470e0f9b5","Type":"ContainerStarted","Data":"49632e0c5da7df88e0c1e989a68db73211997faf35005d4e3a6eb2a8604bcc3b"} Jan 27 09:05:05 crc kubenswrapper[4799]: I0127 09:05:05.451555 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:05:05 crc kubenswrapper[4799]: E0127 09:05:05.452110 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:05:07 crc kubenswrapper[4799]: I0127 09:05:07.191598 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:05:07 crc kubenswrapper[4799]: I0127 09:05:07.192511 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:05:07 crc kubenswrapper[4799]: I0127 09:05:07.244684 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:05:07 crc kubenswrapper[4799]: I0127 09:05:07.262229 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5m9d5" podStartSLOduration=7.893279143 podStartE2EDuration="11.26221435s" podCreationTimestamp="2026-01-27 09:04:56 +0000 UTC" firstStartedPulling="2026-01-27 09:04:58.152006263 +0000 UTC m=+4764.463110328" lastFinishedPulling="2026-01-27 09:05:01.52094147 +0000 UTC m=+4767.832045535" observedRunningTime="2026-01-27 09:05:02.209475802 +0000 UTC m=+4768.520579887" watchObservedRunningTime="2026-01-27 09:05:07.26221435 +0000 UTC m=+4773.573318415" Jan 27 09:05:07 crc kubenswrapper[4799]: I0127 09:05:07.597763 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:05:07 crc kubenswrapper[4799]: I0127 09:05:07.644233 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:05:08 crc kubenswrapper[4799]: I0127 09:05:08.311940 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:05:08 crc kubenswrapper[4799]: I0127 09:05:08.476995 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9cdfk"] Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.257046 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9cdfk" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="registry-server" 
containerID="cri-o://e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0" gracePeriod=2 Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.673601 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.805372 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-catalog-content\") pod \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.805442 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stgqb\" (UniqueName: \"kubernetes.io/projected/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-kube-api-access-stgqb\") pod \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.805578 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-utilities\") pod \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\" (UID: \"341b6c05-d5cf-4c19-bd2b-41adb309c9d8\") " Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.806463 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-utilities" (OuterVolumeSpecName: "utilities") pod "341b6c05-d5cf-4c19-bd2b-41adb309c9d8" (UID: "341b6c05-d5cf-4c19-bd2b-41adb309c9d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.816118 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-kube-api-access-stgqb" (OuterVolumeSpecName: "kube-api-access-stgqb") pod "341b6c05-d5cf-4c19-bd2b-41adb309c9d8" (UID: "341b6c05-d5cf-4c19-bd2b-41adb309c9d8"). InnerVolumeSpecName "kube-api-access-stgqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.906822 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.906865 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stgqb\" (UniqueName: \"kubernetes.io/projected/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-kube-api-access-stgqb\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:09 crc kubenswrapper[4799]: I0127 09:05:09.943343 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "341b6c05-d5cf-4c19-bd2b-41adb309c9d8" (UID: "341b6c05-d5cf-4c19-bd2b-41adb309c9d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.008289 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341b6c05-d5cf-4c19-bd2b-41adb309c9d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.266523 4799 generic.go:334] "Generic (PLEG): container finished" podID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerID="e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0" exitCode=0 Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.266604 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cdfk" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.266624 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cdfk" event={"ID":"341b6c05-d5cf-4c19-bd2b-41adb309c9d8","Type":"ContainerDied","Data":"e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0"} Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.267212 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cdfk" event={"ID":"341b6c05-d5cf-4c19-bd2b-41adb309c9d8","Type":"ContainerDied","Data":"4b0d14b75d210e7a105a095efefe1ff8bca54babbe68259d6dcd5f29e5bd693d"} Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.267234 4799 scope.go:117] "RemoveContainer" containerID="e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.304087 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9cdfk"] Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.307882 4799 scope.go:117] "RemoveContainer" containerID="8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 
09:05:10.309178 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9cdfk"] Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.345351 4799 scope.go:117] "RemoveContainer" containerID="142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.368426 4799 scope.go:117] "RemoveContainer" containerID="e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0" Jan 27 09:05:10 crc kubenswrapper[4799]: E0127 09:05:10.369291 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0\": container with ID starting with e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0 not found: ID does not exist" containerID="e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.369377 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0"} err="failed to get container status \"e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0\": rpc error: code = NotFound desc = could not find container \"e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0\": container with ID starting with e9e77285054d793901f04065268cd741e95010435239f1a82a4817b928ff78c0 not found: ID does not exist" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.369422 4799 scope.go:117] "RemoveContainer" containerID="8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d" Jan 27 09:05:10 crc kubenswrapper[4799]: E0127 09:05:10.370702 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d\": container with ID 
starting with 8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d not found: ID does not exist" containerID="8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.370760 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d"} err="failed to get container status \"8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d\": rpc error: code = NotFound desc = could not find container \"8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d\": container with ID starting with 8e381ac74f2898248180f23d3e8fe318eac7a0516a308f0a77bdbae230b7d66d not found: ID does not exist" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.370796 4799 scope.go:117] "RemoveContainer" containerID="142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174" Jan 27 09:05:10 crc kubenswrapper[4799]: E0127 09:05:10.371110 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174\": container with ID starting with 142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174 not found: ID does not exist" containerID="142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.371138 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174"} err="failed to get container status \"142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174\": rpc error: code = NotFound desc = could not find container \"142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174\": container with ID starting with 142c19f067f4d7c5fc293a3af201000f9eedb3dea8b23734e47296c1c6983174 not found: 
ID does not exist" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.462147 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" path="/var/lib/kubelet/pods/341b6c05-d5cf-4c19-bd2b-41adb309c9d8/volumes" Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.676948 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5m9d5"] Jan 27 09:05:10 crc kubenswrapper[4799]: I0127 09:05:10.677214 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5m9d5" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="registry-server" containerID="cri-o://49632e0c5da7df88e0c1e989a68db73211997faf35005d4e3a6eb2a8604bcc3b" gracePeriod=2 Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.275488 4799 generic.go:334] "Generic (PLEG): container finished" podID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerID="49632e0c5da7df88e0c1e989a68db73211997faf35005d4e3a6eb2a8604bcc3b" exitCode=0 Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.275564 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m9d5" event={"ID":"9320bcde-0dba-460a-81f0-3d3470e0f9b5","Type":"ContainerDied","Data":"49632e0c5da7df88e0c1e989a68db73211997faf35005d4e3a6eb2a8604bcc3b"} Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.275594 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5m9d5" event={"ID":"9320bcde-0dba-460a-81f0-3d3470e0f9b5","Type":"ContainerDied","Data":"311d76b4afaf7f728c0584fd21cebd11ba69d1b037c998596239ab9537420f4b"} Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.275605 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="311d76b4afaf7f728c0584fd21cebd11ba69d1b037c998596239ab9537420f4b" Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.329899 4799 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5m9d5" Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.428100 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-catalog-content\") pod \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.428175 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd2bz\" (UniqueName: \"kubernetes.io/projected/9320bcde-0dba-460a-81f0-3d3470e0f9b5-kube-api-access-vd2bz\") pod \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.428203 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-utilities\") pod \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\" (UID: \"9320bcde-0dba-460a-81f0-3d3470e0f9b5\") " Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.429594 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-utilities" (OuterVolumeSpecName: "utilities") pod "9320bcde-0dba-460a-81f0-3d3470e0f9b5" (UID: "9320bcde-0dba-460a-81f0-3d3470e0f9b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.435045 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9320bcde-0dba-460a-81f0-3d3470e0f9b5-kube-api-access-vd2bz" (OuterVolumeSpecName: "kube-api-access-vd2bz") pod "9320bcde-0dba-460a-81f0-3d3470e0f9b5" (UID: "9320bcde-0dba-460a-81f0-3d3470e0f9b5"). 
InnerVolumeSpecName "kube-api-access-vd2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.500267 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9320bcde-0dba-460a-81f0-3d3470e0f9b5" (UID: "9320bcde-0dba-460a-81f0-3d3470e0f9b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.529838 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.529877 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd2bz\" (UniqueName: \"kubernetes.io/projected/9320bcde-0dba-460a-81f0-3d3470e0f9b5-kube-api-access-vd2bz\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:11 crc kubenswrapper[4799]: I0127 09:05:11.529895 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320bcde-0dba-460a-81f0-3d3470e0f9b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:12 crc kubenswrapper[4799]: I0127 09:05:12.286613 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5m9d5"
Jan 27 09:05:12 crc kubenswrapper[4799]: I0127 09:05:12.327791 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5m9d5"]
Jan 27 09:05:12 crc kubenswrapper[4799]: I0127 09:05:12.334868 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5m9d5"]
Jan 27 09:05:12 crc kubenswrapper[4799]: I0127 09:05:12.461129 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" path="/var/lib/kubelet/pods/9320bcde-0dba-460a-81f0-3d3470e0f9b5/volumes"
Jan 27 09:05:18 crc kubenswrapper[4799]: I0127 09:05:18.451625 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e"
Jan 27 09:05:18 crc kubenswrapper[4799]: E0127 09:05:18.452372 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.349023 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m7sqn"]
Jan 27 09:05:20 crc kubenswrapper[4799]: E0127 09:05:20.349745 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="extract-content"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.349761 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="extract-content"
Jan 27 09:05:20 crc kubenswrapper[4799]: E0127 09:05:20.349782 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="extract-utilities"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.349790 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="extract-utilities"
Jan 27 09:05:20 crc kubenswrapper[4799]: E0127 09:05:20.349799 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="registry-server"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.349806 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="registry-server"
Jan 27 09:05:20 crc kubenswrapper[4799]: E0127 09:05:20.349831 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="extract-utilities"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.349838 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="extract-utilities"
Jan 27 09:05:20 crc kubenswrapper[4799]: E0127 09:05:20.349849 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="registry-server"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.349856 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="registry-server"
Jan 27 09:05:20 crc kubenswrapper[4799]: E0127 09:05:20.349867 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="extract-content"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.349874 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="extract-content"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.350038 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9320bcde-0dba-460a-81f0-3d3470e0f9b5" containerName="registry-server"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.350054 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="341b6c05-d5cf-4c19-bd2b-41adb309c9d8" containerName="registry-server"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.359797 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.365017 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.368113 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.368540 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.368566 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.368842 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-c7bmt"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.449362 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m7sqn"]
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.475021 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.475468 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtghf\" (UniqueName: \"kubernetes.io/projected/aab68f84-c210-4d9a-a516-ddfd602bb371-kube-api-access-xtghf\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.475618 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-config\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.577107 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtghf\" (UniqueName: \"kubernetes.io/projected/aab68f84-c210-4d9a-a516-ddfd602bb371-kube-api-access-xtghf\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.577186 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-config\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.577272 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.578258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-config\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.578827 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.619899 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtghf\" (UniqueName: \"kubernetes.io/projected/aab68f84-c210-4d9a-a516-ddfd602bb371-kube-api-access-xtghf\") pod \"dnsmasq-dns-5d7b5456f5-m7sqn\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.649912 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5r5hj"]
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.660488 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.674600 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5r5hj"]
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.752974 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.782996 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.783051 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.783143 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ggk\" (UniqueName: \"kubernetes.io/projected/9e9901af-5957-43d7-a8a2-dd341614a031-kube-api-access-d6ggk\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.884475 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.884529 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.884623 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6ggk\" (UniqueName: \"kubernetes.io/projected/9e9901af-5957-43d7-a8a2-dd341614a031-kube-api-access-d6ggk\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.885540 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:20 crc kubenswrapper[4799]: I0127 09:05:20.885804 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.244007 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ggk\" (UniqueName: \"kubernetes.io/projected/9e9901af-5957-43d7-a8a2-dd341614a031-kube-api-access-d6ggk\") pod \"dnsmasq-dns-98ddfc8f-5r5hj\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.260371 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m7sqn"]
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.297801 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.360145 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" event={"ID":"aab68f84-c210-4d9a-a516-ddfd602bb371","Type":"ContainerStarted","Data":"90693b878fde7cf196c2e390f2e494fe12eceae4f761f7c55dc02d45ba48d334"}
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.448680 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.450110 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.452015 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.452219 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46llv"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.452354 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.452496 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.452732 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.471586 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593572 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593637 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593704 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593750 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593781 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593798 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593848 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593923 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgwzc\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-kube-api-access-jgwzc\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.593971 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695226 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695284 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695362 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695398 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695435 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695458 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695480 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695510 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgwzc\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-kube-api-access-jgwzc\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.695539 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.697271 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.697862 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.699035 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.699063 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78a41ffdc2eea7b69c2b06ff892c0ca32b72f90ef3ddbfbd963acc48ca8f6c16/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.699469 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.703020 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.703262 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.703348 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.707002 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.719594 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgwzc\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-kube-api-access-jgwzc\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.732785 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: W0127 09:05:21.791427 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e9901af_5957_43d7_a8a2_dd341614a031.slice/crio-24af5f3da41eba7e586e328c950a485a8f0d6f3875bf5afdf07686d6128a582c WatchSource:0}: Error finding container 24af5f3da41eba7e586e328c950a485a8f0d6f3875bf5afdf07686d6128a582c: Status 404 returned error can't find the container with id 24af5f3da41eba7e586e328c950a485a8f0d6f3875bf5afdf07686d6128a582c
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.795191 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5r5hj"]
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.800932 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.831347 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.832623 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.836492 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.837254 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.837375 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.837492 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.846745 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ctk4h"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.851125 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897517 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897594 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ec25425-8dd3-4458-afcf-02ab3f166e97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897619 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897652 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897730 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897777 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897806 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ec25425-8dd3-4458-afcf-02ab3f166e97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897838 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58s5v\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-kube-api-access-58s5v\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.897857 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.999228 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ec25425-8dd3-4458-afcf-02ab3f166e97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:21 crc kubenswrapper[4799]: I0127 09:05:21.999788 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:21.999816 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:21.999882 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:21.999930 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:21.999955 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ec25425-8dd3-4458-afcf-02ab3f166e97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:21.999974 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58s5v\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-kube-api-access-58s5v\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.000008 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.000036 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.001402 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.001761 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.001753 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.002678 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.006898 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.006942 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c240c2644e246a982d55ead9dac1e2ab192baebc70b563e210e646a1188c2985/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.006948 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.008395 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ec25425-8dd3-4458-afcf-02ab3f166e97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.009871 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ec25425-8dd3-4458-afcf-02ab3f166e97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.025684 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58s5v\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-kube-api-access-58s5v\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.055858 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"rabbitmq-cell1-server-0\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.200610 4799 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.262598 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:05:22 crc kubenswrapper[4799]: W0127 09:05:22.272198 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a39570f_ce03_4a9a_9c3b_5f7b7bb86d24.slice/crio-747483f6b03e53c9f95e39f799aca1bfaf9811adac8646afb4149b2a43eb5e3c WatchSource:0}: Error finding container 747483f6b03e53c9f95e39f799aca1bfaf9811adac8646afb4149b2a43eb5e3c: Status 404 returned error can't find the container with id 747483f6b03e53c9f95e39f799aca1bfaf9811adac8646afb4149b2a43eb5e3c Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.369441 4799 generic.go:334] "Generic (PLEG): container finished" podID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerID="b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d" exitCode=0 Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.369515 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" event={"ID":"aab68f84-c210-4d9a-a516-ddfd602bb371","Type":"ContainerDied","Data":"b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d"} Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.375854 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24","Type":"ContainerStarted","Data":"747483f6b03e53c9f95e39f799aca1bfaf9811adac8646afb4149b2a43eb5e3c"} Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.381646 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e9901af-5957-43d7-a8a2-dd341614a031" containerID="73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a" exitCode=0 Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.382031 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" event={"ID":"9e9901af-5957-43d7-a8a2-dd341614a031","Type":"ContainerDied","Data":"73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a"} Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.382059 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" event={"ID":"9e9901af-5957-43d7-a8a2-dd341614a031","Type":"ContainerStarted","Data":"24af5f3da41eba7e586e328c950a485a8f0d6f3875bf5afdf07686d6128a582c"} Jan 27 09:05:22 crc kubenswrapper[4799]: I0127 09:05:22.722428 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.113130 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.115070 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.120426 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.120789 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.121208 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q4sck" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.121723 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.126229 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.134518 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 
09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223472 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223535 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6clv\" (UniqueName: \"kubernetes.io/projected/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-kube-api-access-r6clv\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223588 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223606 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223625 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " 
pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223643 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223675 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.223690 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.324991 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.325059 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6clv\" (UniqueName: \"kubernetes.io/projected/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-kube-api-access-r6clv\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc 
kubenswrapper[4799]: I0127 09:05:23.325128 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.325150 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.325178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.325197 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.325225 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.325241 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.326651 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.326767 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.327135 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.327520 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.329386 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.329421 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aa85599fbfc3e0377f2f2a2b9a9a5f87a964d81364d90ee76817bb81f4f94c7f/globalmount\"" pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.331088 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.331521 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.393416 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" event={"ID":"9e9901af-5957-43d7-a8a2-dd341614a031","Type":"ContainerStarted","Data":"78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0"} Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.394407 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.396035 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" 
event={"ID":"aab68f84-c210-4d9a-a516-ddfd602bb371","Type":"ContainerStarted","Data":"dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d"} Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.396769 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.399218 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ec25425-8dd3-4458-afcf-02ab3f166e97","Type":"ContainerStarted","Data":"f6874d086044f64527ffa5f6a9e85cdda468c7b949ca50c2c82cfbbd331b434e"} Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.434432 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" podStartSLOduration=3.434409285 podStartE2EDuration="3.434409285s" podCreationTimestamp="2026-01-27 09:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:23.419789135 +0000 UTC m=+4789.730893200" watchObservedRunningTime="2026-01-27 09:05:23.434409285 +0000 UTC m=+4789.745513350" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.435518 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.436746 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.438798 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nsz52" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.438968 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.444261 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6clv\" (UniqueName: \"kubernetes.io/projected/fbe8b2ce-30cb-4738-b519-85e0a829bcd4-kube-api-access-r6clv\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.454351 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.470879 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" podStartSLOduration=3.470842269 podStartE2EDuration="3.470842269s" podCreationTimestamp="2026-01-27 09:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:23.444639114 +0000 UTC m=+4789.755743189" watchObservedRunningTime="2026-01-27 09:05:23.470842269 +0000 UTC m=+4789.781946334" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.495967 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bf3ec9e-9f99-4033-b79d-322f1d3e2bd7\") pod \"openstack-galera-0\" (UID: \"fbe8b2ce-30cb-4738-b519-85e0a829bcd4\") " pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.528450 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-kolla-config\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.528708 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-config-data\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.528738 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rj9v\" (UniqueName: \"kubernetes.io/projected/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-kube-api-access-5rj9v\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.630458 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rj9v\" (UniqueName: \"kubernetes.io/projected/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-kube-api-access-5rj9v\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.630601 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-kolla-config\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.630676 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-config-data\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.631764 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-config-data\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.631774 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-kolla-config\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.735380 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.742866 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rj9v\" (UniqueName: \"kubernetes.io/projected/8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7-kube-api-access-5rj9v\") pod \"memcached-0\" (UID: \"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7\") " pod="openstack/memcached-0" Jan 27 09:05:23 crc kubenswrapper[4799]: I0127 09:05:23.753930 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.189136 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.258654 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 09:05:24 crc kubenswrapper[4799]: W0127 09:05:24.263006 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbe8b2ce_30cb_4738_b519_85e0a829bcd4.slice/crio-f22619fccf5cd7a33d435d375386a2c67223de4659e1e46c8f1356b960da0c5c WatchSource:0}: Error finding container f22619fccf5cd7a33d435d375386a2c67223de4659e1e46c8f1356b960da0c5c: Status 404 returned error can't find the container with id f22619fccf5cd7a33d435d375386a2c67223de4659e1e46c8f1356b960da0c5c Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.408021 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24","Type":"ContainerStarted","Data":"cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06"} Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.409935 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7","Type":"ContainerStarted","Data":"928297a55a9129bddecceaf911108fbaf2198176a28c915766d269cec7689aef"} Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.409974 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7","Type":"ContainerStarted","Data":"d1ad3db7fd93e4c7739795aa9eabf9d02b781f7650ae3ab5190793b98cc9c034"} Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.410057 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 09:05:24 crc kubenswrapper[4799]: 
I0127 09:05:24.411507 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ec25425-8dd3-4458-afcf-02ab3f166e97","Type":"ContainerStarted","Data":"3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123"} Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.413101 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fbe8b2ce-30cb-4738-b519-85e0a829bcd4","Type":"ContainerStarted","Data":"f22619fccf5cd7a33d435d375386a2c67223de4659e1e46c8f1356b960da0c5c"} Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.455072 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.455051426 podStartE2EDuration="1.455051426s" podCreationTimestamp="2026-01-27 09:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:24.452808924 +0000 UTC m=+4790.763913009" watchObservedRunningTime="2026-01-27 09:05:24.455051426 +0000 UTC m=+4790.766155501" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.594984 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.596215 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.601029 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.601704 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nqllr" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.602322 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.618802 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.620063 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747022 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747084 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747123 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747142 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc27w\" (UniqueName: \"kubernetes.io/projected/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-kube-api-access-mc27w\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747395 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747437 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.747485 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849358 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849509 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc27w\" (UniqueName: \"kubernetes.io/projected/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-kube-api-access-mc27w\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849554 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849622 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849682 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849741 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849806 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.849831 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.850035 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.850703 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.851144 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.853562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.853778 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.853826 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5431d667f91fcc3b9bffaeecc9550f04bb70fe744fc9086b0919e575a42f82ef/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.854258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.857573 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.866582 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc27w\" (UniqueName: \"kubernetes.io/projected/f1c13e78-9e9e-4b56-aba4-df7d2a77339d-kube-api-access-mc27w\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.892231 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4aad16f-bd20-45cb-8602-84a0402840a7\") pod \"openstack-cell1-galera-0\" (UID: \"f1c13e78-9e9e-4b56-aba4-df7d2a77339d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:24 crc kubenswrapper[4799]: I0127 09:05:24.948161 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:25 crc kubenswrapper[4799]: I0127 09:05:25.378681 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 09:05:25 crc kubenswrapper[4799]: I0127 09:05:25.422119 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f1c13e78-9e9e-4b56-aba4-df7d2a77339d","Type":"ContainerStarted","Data":"045be0ed78f204c124eab6bd2232fca739e6f5f5da10026f55d51fa74b8968c9"} Jan 27 09:05:25 crc kubenswrapper[4799]: I0127 09:05:25.423663 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fbe8b2ce-30cb-4738-b519-85e0a829bcd4","Type":"ContainerStarted","Data":"ac605aa435e6561f94f047c4d4f434a95c71863c32bf88defc62da63c078d74b"} Jan 27 09:05:26 crc kubenswrapper[4799]: I0127 09:05:26.431194 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f1c13e78-9e9e-4b56-aba4-df7d2a77339d","Type":"ContainerStarted","Data":"3c74d288ee02d4ec212ad08070d97c2013f8cc1d0d28fef3b5aca743e08dd4b4"} Jan 27 09:05:29 crc kubenswrapper[4799]: E0127 09:05:29.252079 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1c13e78_9e9e_4b56_aba4_df7d2a77339d.slice/crio-conmon-3c74d288ee02d4ec212ad08070d97c2013f8cc1d0d28fef3b5aca743e08dd4b4.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:05:29 crc kubenswrapper[4799]: I0127 09:05:29.456124 4799 generic.go:334] 
"Generic (PLEG): container finished" podID="fbe8b2ce-30cb-4738-b519-85e0a829bcd4" containerID="ac605aa435e6561f94f047c4d4f434a95c71863c32bf88defc62da63c078d74b" exitCode=0 Jan 27 09:05:29 crc kubenswrapper[4799]: I0127 09:05:29.456217 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fbe8b2ce-30cb-4738-b519-85e0a829bcd4","Type":"ContainerDied","Data":"ac605aa435e6561f94f047c4d4f434a95c71863c32bf88defc62da63c078d74b"} Jan 27 09:05:29 crc kubenswrapper[4799]: I0127 09:05:29.458845 4799 generic.go:334] "Generic (PLEG): container finished" podID="f1c13e78-9e9e-4b56-aba4-df7d2a77339d" containerID="3c74d288ee02d4ec212ad08070d97c2013f8cc1d0d28fef3b5aca743e08dd4b4" exitCode=0 Jan 27 09:05:29 crc kubenswrapper[4799]: I0127 09:05:29.458911 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f1c13e78-9e9e-4b56-aba4-df7d2a77339d","Type":"ContainerDied","Data":"3c74d288ee02d4ec212ad08070d97c2013f8cc1d0d28fef3b5aca743e08dd4b4"} Jan 27 09:05:30 crc kubenswrapper[4799]: I0127 09:05:30.471868 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fbe8b2ce-30cb-4738-b519-85e0a829bcd4","Type":"ContainerStarted","Data":"65704d8df8727a112bdea83490ce3885477e29f61719b1e1837ea27efe12e81c"} Jan 27 09:05:30 crc kubenswrapper[4799]: I0127 09:05:30.475499 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f1c13e78-9e9e-4b56-aba4-df7d2a77339d","Type":"ContainerStarted","Data":"8daf943086072564ab3c63115d88e72b9c379512ac096a203d51cd2a346fb777"} Jan 27 09:05:30 crc kubenswrapper[4799]: I0127 09:05:30.502262 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.50224389 podStartE2EDuration="8.50224389s" podCreationTimestamp="2026-01-27 09:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:30.494669103 +0000 UTC m=+4796.805773188" watchObservedRunningTime="2026-01-27 09:05:30.50224389 +0000 UTC m=+4796.813347965" Jan 27 09:05:30 crc kubenswrapper[4799]: I0127 09:05:30.524042 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.524018334 podStartE2EDuration="7.524018334s" podCreationTimestamp="2026-01-27 09:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:30.514655879 +0000 UTC m=+4796.825759984" watchObservedRunningTime="2026-01-27 09:05:30.524018334 +0000 UTC m=+4796.835122419" Jan 27 09:05:30 crc kubenswrapper[4799]: I0127 09:05:30.754574 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.299523 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.360986 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m7sqn"] Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.480988 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" podUID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerName="dnsmasq-dns" containerID="cri-o://dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d" gracePeriod=10 Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.902473 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.970431 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-config\") pod \"aab68f84-c210-4d9a-a516-ddfd602bb371\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.970799 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-dns-svc\") pod \"aab68f84-c210-4d9a-a516-ddfd602bb371\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.970849 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtghf\" (UniqueName: \"kubernetes.io/projected/aab68f84-c210-4d9a-a516-ddfd602bb371-kube-api-access-xtghf\") pod \"aab68f84-c210-4d9a-a516-ddfd602bb371\" (UID: \"aab68f84-c210-4d9a-a516-ddfd602bb371\") " Jan 27 09:05:31 crc kubenswrapper[4799]: I0127 09:05:31.976502 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab68f84-c210-4d9a-a516-ddfd602bb371-kube-api-access-xtghf" (OuterVolumeSpecName: "kube-api-access-xtghf") pod "aab68f84-c210-4d9a-a516-ddfd602bb371" (UID: "aab68f84-c210-4d9a-a516-ddfd602bb371"). InnerVolumeSpecName "kube-api-access-xtghf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.000890 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-config" (OuterVolumeSpecName: "config") pod "aab68f84-c210-4d9a-a516-ddfd602bb371" (UID: "aab68f84-c210-4d9a-a516-ddfd602bb371"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.001271 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aab68f84-c210-4d9a-a516-ddfd602bb371" (UID: "aab68f84-c210-4d9a-a516-ddfd602bb371"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.071681 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.071721 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab68f84-c210-4d9a-a516-ddfd602bb371-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.071736 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtghf\" (UniqueName: \"kubernetes.io/projected/aab68f84-c210-4d9a-a516-ddfd602bb371-kube-api-access-xtghf\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.493209 4799 generic.go:334] "Generic (PLEG): container finished" podID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerID="dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d" exitCode=0 Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.493262 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" event={"ID":"aab68f84-c210-4d9a-a516-ddfd602bb371","Type":"ContainerDied","Data":"dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d"} Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.493278 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.493293 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m7sqn" event={"ID":"aab68f84-c210-4d9a-a516-ddfd602bb371","Type":"ContainerDied","Data":"90693b878fde7cf196c2e390f2e494fe12eceae4f761f7c55dc02d45ba48d334"} Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.493372 4799 scope.go:117] "RemoveContainer" containerID="dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.528230 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m7sqn"] Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.534072 4799 scope.go:117] "RemoveContainer" containerID="b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.536335 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m7sqn"] Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.554149 4799 scope.go:117] "RemoveContainer" containerID="dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d" Jan 27 09:05:32 crc kubenswrapper[4799]: E0127 09:05:32.554647 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d\": container with ID starting with dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d not found: ID does not exist" containerID="dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.554693 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d"} err="failed to get container status 
\"dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d\": rpc error: code = NotFound desc = could not find container \"dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d\": container with ID starting with dfcf37364af15c12cc20e39e95513603029a2c0445ab2df4f4f8fa22b5f8a66d not found: ID does not exist" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.554715 4799 scope.go:117] "RemoveContainer" containerID="b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d" Jan 27 09:05:32 crc kubenswrapper[4799]: E0127 09:05:32.555023 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d\": container with ID starting with b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d not found: ID does not exist" containerID="b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d" Jan 27 09:05:32 crc kubenswrapper[4799]: I0127 09:05:32.555074 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d"} err="failed to get container status \"b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d\": rpc error: code = NotFound desc = could not find container \"b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d\": container with ID starting with b4f482696a494fc4fe8a0fff334d11302d0b314fdb0204456e021cfb52c30f2d not found: ID does not exist" Jan 27 09:05:33 crc kubenswrapper[4799]: I0127 09:05:33.451652 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:05:33 crc kubenswrapper[4799]: E0127 09:05:33.452342 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:05:33 crc kubenswrapper[4799]: I0127 09:05:33.735563 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 09:05:33 crc kubenswrapper[4799]: I0127 09:05:33.735642 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 09:05:33 crc kubenswrapper[4799]: I0127 09:05:33.756052 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 09:05:34 crc kubenswrapper[4799]: I0127 09:05:34.465977 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab68f84-c210-4d9a-a516-ddfd602bb371" path="/var/lib/kubelet/pods/aab68f84-c210-4d9a-a516-ddfd602bb371/volumes" Jan 27 09:05:34 crc kubenswrapper[4799]: I0127 09:05:34.713864 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 09:05:34 crc kubenswrapper[4799]: I0127 09:05:34.820820 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 09:05:34 crc kubenswrapper[4799]: I0127 09:05:34.948708 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:34 crc kubenswrapper[4799]: I0127 09:05:34.948765 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:37 crc kubenswrapper[4799]: I0127 09:05:37.259392 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:37 crc kubenswrapper[4799]: I0127 09:05:37.329600 4799 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.085356 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9ltg7"] Jan 27 09:05:42 crc kubenswrapper[4799]: E0127 09:05:42.086294 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerName="dnsmasq-dns" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.086332 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerName="dnsmasq-dns" Jan 27 09:05:42 crc kubenswrapper[4799]: E0127 09:05:42.086350 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerName="init" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.086357 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerName="init" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.086552 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab68f84-c210-4d9a-a516-ddfd602bb371" containerName="dnsmasq-dns" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.087160 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.090198 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.107286 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9ltg7"] Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.134384 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cff744d-9c93-4187-b03e-8f5d7f269535-operator-scripts\") pod \"root-account-create-update-9ltg7\" (UID: \"8cff744d-9c93-4187-b03e-8f5d7f269535\") " pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.134476 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dd8s\" (UniqueName: \"kubernetes.io/projected/8cff744d-9c93-4187-b03e-8f5d7f269535-kube-api-access-8dd8s\") pod \"root-account-create-update-9ltg7\" (UID: \"8cff744d-9c93-4187-b03e-8f5d7f269535\") " pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.236376 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cff744d-9c93-4187-b03e-8f5d7f269535-operator-scripts\") pod \"root-account-create-update-9ltg7\" (UID: \"8cff744d-9c93-4187-b03e-8f5d7f269535\") " pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.236572 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dd8s\" (UniqueName: \"kubernetes.io/projected/8cff744d-9c93-4187-b03e-8f5d7f269535-kube-api-access-8dd8s\") pod \"root-account-create-update-9ltg7\" (UID: 
\"8cff744d-9c93-4187-b03e-8f5d7f269535\") " pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.237295 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cff744d-9c93-4187-b03e-8f5d7f269535-operator-scripts\") pod \"root-account-create-update-9ltg7\" (UID: \"8cff744d-9c93-4187-b03e-8f5d7f269535\") " pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.256087 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dd8s\" (UniqueName: \"kubernetes.io/projected/8cff744d-9c93-4187-b03e-8f5d7f269535-kube-api-access-8dd8s\") pod \"root-account-create-update-9ltg7\" (UID: \"8cff744d-9c93-4187-b03e-8f5d7f269535\") " pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.404228 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:42 crc kubenswrapper[4799]: I0127 09:05:42.854347 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9ltg7"] Jan 27 09:05:42 crc kubenswrapper[4799]: W0127 09:05:42.867310 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cff744d_9c93_4187_b03e_8f5d7f269535.slice/crio-24a07882ab49963a16f82099cec290fab8d926ad10cd6a3582c7dd6d25ad952e WatchSource:0}: Error finding container 24a07882ab49963a16f82099cec290fab8d926ad10cd6a3582c7dd6d25ad952e: Status 404 returned error can't find the container with id 24a07882ab49963a16f82099cec290fab8d926ad10cd6a3582c7dd6d25ad952e Jan 27 09:05:43 crc kubenswrapper[4799]: I0127 09:05:43.577654 4799 generic.go:334] "Generic (PLEG): container finished" podID="8cff744d-9c93-4187-b03e-8f5d7f269535" containerID="af361d61e6fd7f84ab686353adff4fb91448fc0b9d8ebc26fa8079545a8e5189" exitCode=0 Jan 27 09:05:43 crc kubenswrapper[4799]: I0127 09:05:43.577711 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9ltg7" event={"ID":"8cff744d-9c93-4187-b03e-8f5d7f269535","Type":"ContainerDied","Data":"af361d61e6fd7f84ab686353adff4fb91448fc0b9d8ebc26fa8079545a8e5189"} Jan 27 09:05:43 crc kubenswrapper[4799]: I0127 09:05:43.577986 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9ltg7" event={"ID":"8cff744d-9c93-4187-b03e-8f5d7f269535","Type":"ContainerStarted","Data":"24a07882ab49963a16f82099cec290fab8d926ad10cd6a3582c7dd6d25ad952e"} Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.013407 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.098843 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dd8s\" (UniqueName: \"kubernetes.io/projected/8cff744d-9c93-4187-b03e-8f5d7f269535-kube-api-access-8dd8s\") pod \"8cff744d-9c93-4187-b03e-8f5d7f269535\" (UID: \"8cff744d-9c93-4187-b03e-8f5d7f269535\") " Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.099446 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cff744d-9c93-4187-b03e-8f5d7f269535-operator-scripts\") pod \"8cff744d-9c93-4187-b03e-8f5d7f269535\" (UID: \"8cff744d-9c93-4187-b03e-8f5d7f269535\") " Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.100648 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cff744d-9c93-4187-b03e-8f5d7f269535-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8cff744d-9c93-4187-b03e-8f5d7f269535" (UID: "8cff744d-9c93-4187-b03e-8f5d7f269535"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.105077 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cff744d-9c93-4187-b03e-8f5d7f269535-kube-api-access-8dd8s" (OuterVolumeSpecName: "kube-api-access-8dd8s") pod "8cff744d-9c93-4187-b03e-8f5d7f269535" (UID: "8cff744d-9c93-4187-b03e-8f5d7f269535"). InnerVolumeSpecName "kube-api-access-8dd8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.201125 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cff744d-9c93-4187-b03e-8f5d7f269535-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.201178 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dd8s\" (UniqueName: \"kubernetes.io/projected/8cff744d-9c93-4187-b03e-8f5d7f269535-kube-api-access-8dd8s\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.451648 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:05:45 crc kubenswrapper[4799]: E0127 09:05:45.452158 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.597830 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9ltg7" event={"ID":"8cff744d-9c93-4187-b03e-8f5d7f269535","Type":"ContainerDied","Data":"24a07882ab49963a16f82099cec290fab8d926ad10cd6a3582c7dd6d25ad952e"} Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.597879 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24a07882ab49963a16f82099cec290fab8d926ad10cd6a3582c7dd6d25ad952e" Jan 27 09:05:45 crc kubenswrapper[4799]: I0127 09:05:45.597903 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9ltg7" Jan 27 09:05:48 crc kubenswrapper[4799]: I0127 09:05:48.601719 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9ltg7"] Jan 27 09:05:48 crc kubenswrapper[4799]: I0127 09:05:48.610147 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9ltg7"] Jan 27 09:05:50 crc kubenswrapper[4799]: I0127 09:05:50.572464 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cff744d-9c93-4187-b03e-8f5d7f269535" path="/var/lib/kubelet/pods/8cff744d-9c93-4187-b03e-8f5d7f269535/volumes" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.614075 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5476s"] Jan 27 09:05:53 crc kubenswrapper[4799]: E0127 09:05:53.614939 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cff744d-9c93-4187-b03e-8f5d7f269535" containerName="mariadb-account-create-update" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.614963 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cff744d-9c93-4187-b03e-8f5d7f269535" containerName="mariadb-account-create-update" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.615242 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cff744d-9c93-4187-b03e-8f5d7f269535" containerName="mariadb-account-create-update" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.616029 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5476s" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.624394 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5476s"] Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.626844 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.637272 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t64q\" (UniqueName: \"kubernetes.io/projected/3e273bb3-f59b-4b53-a996-22631e029156-kube-api-access-7t64q\") pod \"root-account-create-update-5476s\" (UID: \"3e273bb3-f59b-4b53-a996-22631e029156\") " pod="openstack/root-account-create-update-5476s" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.637489 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e273bb3-f59b-4b53-a996-22631e029156-operator-scripts\") pod \"root-account-create-update-5476s\" (UID: \"3e273bb3-f59b-4b53-a996-22631e029156\") " pod="openstack/root-account-create-update-5476s" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.738712 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t64q\" (UniqueName: \"kubernetes.io/projected/3e273bb3-f59b-4b53-a996-22631e029156-kube-api-access-7t64q\") pod \"root-account-create-update-5476s\" (UID: \"3e273bb3-f59b-4b53-a996-22631e029156\") " pod="openstack/root-account-create-update-5476s" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.739058 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e273bb3-f59b-4b53-a996-22631e029156-operator-scripts\") pod \"root-account-create-update-5476s\" (UID: 
\"3e273bb3-f59b-4b53-a996-22631e029156\") " pod="openstack/root-account-create-update-5476s" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.740052 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e273bb3-f59b-4b53-a996-22631e029156-operator-scripts\") pod \"root-account-create-update-5476s\" (UID: \"3e273bb3-f59b-4b53-a996-22631e029156\") " pod="openstack/root-account-create-update-5476s" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.761647 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t64q\" (UniqueName: \"kubernetes.io/projected/3e273bb3-f59b-4b53-a996-22631e029156-kube-api-access-7t64q\") pod \"root-account-create-update-5476s\" (UID: \"3e273bb3-f59b-4b53-a996-22631e029156\") " pod="openstack/root-account-create-update-5476s" Jan 27 09:05:53 crc kubenswrapper[4799]: I0127 09:05:53.933895 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5476s" Jan 27 09:05:54 crc kubenswrapper[4799]: I0127 09:05:54.386449 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5476s"] Jan 27 09:05:54 crc kubenswrapper[4799]: W0127 09:05:54.394546 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e273bb3_f59b_4b53_a996_22631e029156.slice/crio-0ce82e3fb7673730d6ccbda3e6d8b7ed664e6b6765e6fc5f22eb53f686280182 WatchSource:0}: Error finding container 0ce82e3fb7673730d6ccbda3e6d8b7ed664e6b6765e6fc5f22eb53f686280182: Status 404 returned error can't find the container with id 0ce82e3fb7673730d6ccbda3e6d8b7ed664e6b6765e6fc5f22eb53f686280182 Jan 27 09:05:54 crc kubenswrapper[4799]: I0127 09:05:54.669946 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5476s" event={"ID":"3e273bb3-f59b-4b53-a996-22631e029156","Type":"ContainerStarted","Data":"7b2a5cc6166e1513bb4fe1521862a85f0225c13454f06f867841f72953d7fb3f"} Jan 27 09:05:54 crc kubenswrapper[4799]: I0127 09:05:54.670287 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5476s" event={"ID":"3e273bb3-f59b-4b53-a996-22631e029156","Type":"ContainerStarted","Data":"0ce82e3fb7673730d6ccbda3e6d8b7ed664e6b6765e6fc5f22eb53f686280182"} Jan 27 09:05:54 crc kubenswrapper[4799]: I0127 09:05:54.698475 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-5476s" podStartSLOduration=1.698452741 podStartE2EDuration="1.698452741s" podCreationTimestamp="2026-01-27 09:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:54.688480339 +0000 UTC m=+4820.999584404" watchObservedRunningTime="2026-01-27 09:05:54.698452741 +0000 UTC m=+4821.009556816" Jan 
27 09:05:55 crc kubenswrapper[4799]: I0127 09:05:55.693931 4799 generic.go:334] "Generic (PLEG): container finished" podID="3e273bb3-f59b-4b53-a996-22631e029156" containerID="7b2a5cc6166e1513bb4fe1521862a85f0225c13454f06f867841f72953d7fb3f" exitCode=0 Jan 27 09:05:55 crc kubenswrapper[4799]: I0127 09:05:55.698506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5476s" event={"ID":"3e273bb3-f59b-4b53-a996-22631e029156","Type":"ContainerDied","Data":"7b2a5cc6166e1513bb4fe1521862a85f0225c13454f06f867841f72953d7fb3f"} Jan 27 09:05:56 crc kubenswrapper[4799]: I0127 09:05:56.451235 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:05:56 crc kubenswrapper[4799]: E0127 09:05:56.452011 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:05:56 crc kubenswrapper[4799]: I0127 09:05:56.704180 4799 generic.go:334] "Generic (PLEG): container finished" podID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerID="cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06" exitCode=0 Jan 27 09:05:56 crc kubenswrapper[4799]: I0127 09:05:56.704255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24","Type":"ContainerDied","Data":"cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06"} Jan 27 09:05:56 crc kubenswrapper[4799]: I0127 09:05:56.705900 4799 generic.go:334] "Generic (PLEG): container finished" podID="6ec25425-8dd3-4458-afcf-02ab3f166e97" 
containerID="3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123" exitCode=0 Jan 27 09:05:56 crc kubenswrapper[4799]: I0127 09:05:56.706081 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ec25425-8dd3-4458-afcf-02ab3f166e97","Type":"ContainerDied","Data":"3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123"} Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.012209 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5476s" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.099672 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t64q\" (UniqueName: \"kubernetes.io/projected/3e273bb3-f59b-4b53-a996-22631e029156-kube-api-access-7t64q\") pod \"3e273bb3-f59b-4b53-a996-22631e029156\" (UID: \"3e273bb3-f59b-4b53-a996-22631e029156\") " Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.099734 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e273bb3-f59b-4b53-a996-22631e029156-operator-scripts\") pod \"3e273bb3-f59b-4b53-a996-22631e029156\" (UID: \"3e273bb3-f59b-4b53-a996-22631e029156\") " Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.100535 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e273bb3-f59b-4b53-a996-22631e029156-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e273bb3-f59b-4b53-a996-22631e029156" (UID: "3e273bb3-f59b-4b53-a996-22631e029156"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.104560 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e273bb3-f59b-4b53-a996-22631e029156-kube-api-access-7t64q" (OuterVolumeSpecName: "kube-api-access-7t64q") pod "3e273bb3-f59b-4b53-a996-22631e029156" (UID: "3e273bb3-f59b-4b53-a996-22631e029156"). InnerVolumeSpecName "kube-api-access-7t64q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.201537 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t64q\" (UniqueName: \"kubernetes.io/projected/3e273bb3-f59b-4b53-a996-22631e029156-kube-api-access-7t64q\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.201585 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e273bb3-f59b-4b53-a996-22631e029156-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.715899 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ec25425-8dd3-4458-afcf-02ab3f166e97","Type":"ContainerStarted","Data":"0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f"} Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.716211 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.718620 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24","Type":"ContainerStarted","Data":"5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7"} Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.718855 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.720512 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5476s" event={"ID":"3e273bb3-f59b-4b53-a996-22631e029156","Type":"ContainerDied","Data":"0ce82e3fb7673730d6ccbda3e6d8b7ed664e6b6765e6fc5f22eb53f686280182"} Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.720551 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce82e3fb7673730d6ccbda3e6d8b7ed664e6b6765e6fc5f22eb53f686280182" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.720559 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5476s" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.761900 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.761874685 podStartE2EDuration="37.761874685s" podCreationTimestamp="2026-01-27 09:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:57.754114673 +0000 UTC m=+4824.065218748" watchObservedRunningTime="2026-01-27 09:05:57.761874685 +0000 UTC m=+4824.072978750" Jan 27 09:05:57 crc kubenswrapper[4799]: I0127 09:05:57.784823 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.78480314 podStartE2EDuration="37.78480314s" podCreationTimestamp="2026-01-27 09:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:05:57.777915642 +0000 UTC m=+4824.089019707" watchObservedRunningTime="2026-01-27 09:05:57.78480314 +0000 UTC m=+4824.095907205" Jan 27 09:06:07 crc kubenswrapper[4799]: I0127 09:06:07.451116 4799 scope.go:117] "RemoveContainer" 
containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:06:07 crc kubenswrapper[4799]: E0127 09:06:07.451988 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:06:11 crc kubenswrapper[4799]: I0127 09:06:11.804570 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 09:06:12 crc kubenswrapper[4799]: I0127 09:06:12.203520 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.667136 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vzt79"] Jan 27 09:06:17 crc kubenswrapper[4799]: E0127 09:06:17.668187 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e273bb3-f59b-4b53-a996-22631e029156" containerName="mariadb-account-create-update" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.668205 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e273bb3-f59b-4b53-a996-22631e029156" containerName="mariadb-account-create-update" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.668429 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e273bb3-f59b-4b53-a996-22631e029156" containerName="mariadb-account-create-update" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.669490 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.692384 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vzt79"] Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.841506 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl54c\" (UniqueName: \"kubernetes.io/projected/1fdce98d-d75c-4d02-98b5-6f19b41b0228-kube-api-access-gl54c\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.841596 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-config\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.841656 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.942977 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl54c\" (UniqueName: \"kubernetes.io/projected/1fdce98d-d75c-4d02-98b5-6f19b41b0228-kube-api-access-gl54c\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.943032 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-config\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.943050 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.944026 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.944054 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-config\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.971954 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl54c\" (UniqueName: \"kubernetes.io/projected/1fdce98d-d75c-4d02-98b5-6f19b41b0228-kube-api-access-gl54c\") pod \"dnsmasq-dns-5b7946d7b9-vzt79\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:17 crc kubenswrapper[4799]: I0127 09:06:17.989199 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:18 crc kubenswrapper[4799]: I0127 09:06:18.302504 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:06:18 crc kubenswrapper[4799]: I0127 09:06:18.421421 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vzt79"] Jan 27 09:06:18 crc kubenswrapper[4799]: W0127 09:06:18.424964 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fdce98d_d75c_4d02_98b5_6f19b41b0228.slice/crio-da394a6027560258e50724108bef7a29568e29166c39394e5b43b5a025213d05 WatchSource:0}: Error finding container da394a6027560258e50724108bef7a29568e29166c39394e5b43b5a025213d05: Status 404 returned error can't find the container with id da394a6027560258e50724108bef7a29568e29166c39394e5b43b5a025213d05 Jan 27 09:06:18 crc kubenswrapper[4799]: I0127 09:06:18.881916 4799 generic.go:334] "Generic (PLEG): container finished" podID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerID="f77a3427b4014f4a80ed7dae4db9e0f64e101f94ff069046403d66c1cb0d222e" exitCode=0 Jan 27 09:06:18 crc kubenswrapper[4799]: I0127 09:06:18.881964 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" event={"ID":"1fdce98d-d75c-4d02-98b5-6f19b41b0228","Type":"ContainerDied","Data":"f77a3427b4014f4a80ed7dae4db9e0f64e101f94ff069046403d66c1cb0d222e"} Jan 27 09:06:18 crc kubenswrapper[4799]: I0127 09:06:18.881998 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" event={"ID":"1fdce98d-d75c-4d02-98b5-6f19b41b0228","Type":"ContainerStarted","Data":"da394a6027560258e50724108bef7a29568e29166c39394e5b43b5a025213d05"} Jan 27 09:06:18 crc kubenswrapper[4799]: I0127 09:06:18.978715 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:06:19 crc 
kubenswrapper[4799]: I0127 09:06:19.889539 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" event={"ID":"1fdce98d-d75c-4d02-98b5-6f19b41b0228","Type":"ContainerStarted","Data":"1aab06edda484e4f8531ceeb4f02bf7bb586140b1d70c3d22ec32fd7f0807478"} Jan 27 09:06:19 crc kubenswrapper[4799]: I0127 09:06:19.889697 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:19 crc kubenswrapper[4799]: I0127 09:06:19.912046 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" podStartSLOduration=2.912027515 podStartE2EDuration="2.912027515s" podCreationTimestamp="2026-01-27 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:06:19.909380863 +0000 UTC m=+4846.220484948" watchObservedRunningTime="2026-01-27 09:06:19.912027515 +0000 UTC m=+4846.223131570" Jan 27 09:06:20 crc kubenswrapper[4799]: I0127 09:06:20.146478 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerName="rabbitmq" containerID="cri-o://5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7" gracePeriod=604799 Jan 27 09:06:20 crc kubenswrapper[4799]: I0127 09:06:20.452034 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:06:20 crc kubenswrapper[4799]: E0127 09:06:20.452427 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:06:20 crc kubenswrapper[4799]: I0127 09:06:20.794647 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerName="rabbitmq" containerID="cri-o://0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f" gracePeriod=604799 Jan 27 09:06:21 crc kubenswrapper[4799]: I0127 09:06:21.801627 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.236:5672: connect: connection refused" Jan 27 09:06:22 crc kubenswrapper[4799]: I0127 09:06:22.202858 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.237:5672: connect: connection refused" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.784564 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873346 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873424 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgwzc\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-kube-api-access-jgwzc\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873501 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-plugins-conf\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873569 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-plugins\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873597 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-confd\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873678 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-erlang-cookie-secret\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873727 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-pod-info\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873762 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-server-conf\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.873846 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-erlang-cookie\") pod \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\" (UID: \"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24\") " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.874675 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.875062 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.875075 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.879492 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.881220 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-kube-api-access-jgwzc" (OuterVolumeSpecName: "kube-api-access-jgwzc") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "kube-api-access-jgwzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.882475 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-pod-info" (OuterVolumeSpecName: "pod-info") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.885859 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda" (OuterVolumeSpecName: "persistence") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.895604 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-server-conf" (OuterVolumeSpecName: "server-conf") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.949791 4799 generic.go:334] "Generic (PLEG): container finished" podID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerID="5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7" exitCode=0 Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.949841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24","Type":"ContainerDied","Data":"5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7"} Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.949880 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24","Type":"ContainerDied","Data":"747483f6b03e53c9f95e39f799aca1bfaf9811adac8646afb4149b2a43eb5e3c"} Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.949908 4799 scope.go:117] "RemoveContainer" containerID="5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.950066 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.961292 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" (UID: "5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.974598 4799 scope.go:117] "RemoveContainer" containerID="cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976114 4799 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976142 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976155 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976167 4799 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976178 4799 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976188 4799 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976199 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976235 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") on node \"crc\" " Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.976250 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgwzc\" (UniqueName: \"kubernetes.io/projected/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24-kube-api-access-jgwzc\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.993922 4799 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 09:06:26 crc kubenswrapper[4799]: I0127 09:06:26.994094 4799 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda") on node "crc" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.066204 4799 scope.go:117] "RemoveContainer" containerID="5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7" Jan 27 09:06:27 crc kubenswrapper[4799]: E0127 09:06:27.066721 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7\": container with ID starting with 5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7 not found: ID does not exist" containerID="5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.066765 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7"} err="failed to get container status \"5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7\": rpc error: code = NotFound desc = could not find container \"5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7\": container with ID starting with 5e8a8ad4f96ddd475aed5b9ab1e9ba3fb900e0ee513291ed4fe92f58023a18d7 not found: ID does not exist" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.066790 4799 scope.go:117] "RemoveContainer" containerID="cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06" Jan 27 09:06:27 crc kubenswrapper[4799]: E0127 09:06:27.067079 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06\": container with ID starting with cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06 not found: ID does not exist" containerID="cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.067121 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06"} err="failed to get container status \"cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06\": rpc error: code = NotFound desc = could not find container \"cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06\": container with ID starting with cb2564501b5dd5f427e94f2d951bbd3e57d3ec0db012a694f656455648ef1e06 not found: ID does not exist" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.078208 4799 reconciler_common.go:293] "Volume detached for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") on node \"crc\" 
DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.281337 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.288493 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.291957 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.314763 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:06:27 crc kubenswrapper[4799]: E0127 09:06:27.315218 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerName="setup-container" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.315241 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerName="setup-container" Jan 27 09:06:27 crc kubenswrapper[4799]: E0127 09:06:27.315267 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerName="rabbitmq" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.315277 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerName="rabbitmq" Jan 27 09:06:27 crc kubenswrapper[4799]: E0127 09:06:27.315319 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerName="setup-container" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.315329 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerName="setup-container" Jan 27 09:06:27 crc kubenswrapper[4799]: E0127 09:06:27.315343 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerName="rabbitmq" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.315351 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerName="rabbitmq" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.315537 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerName="rabbitmq" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.315554 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" containerName="rabbitmq" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.316459 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.318425 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.318681 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.318928 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.324751 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46llv" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.325504 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.347443 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.381729 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58s5v\" 
(UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-kube-api-access-58s5v\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.381872 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.381928 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ec25425-8dd3-4458-afcf-02ab3f166e97-pod-info\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.382004 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-plugins\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.382038 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-erlang-cookie\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.382071 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-server-conf\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 
crc kubenswrapper[4799]: I0127 09:06:27.382091 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-plugins-conf\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.382129 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ec25425-8dd3-4458-afcf-02ab3f166e97-erlang-cookie-secret\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.382196 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-confd\") pod \"6ec25425-8dd3-4458-afcf-02ab3f166e97\" (UID: \"6ec25425-8dd3-4458-afcf-02ab3f166e97\") " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.382831 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.383010 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.385092 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.386012 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-kube-api-access-58s5v" (OuterVolumeSpecName: "kube-api-access-58s5v") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "kube-api-access-58s5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.386023 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ec25425-8dd3-4458-afcf-02ab3f166e97-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.389478 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6ec25425-8dd3-4458-afcf-02ab3f166e97-pod-info" (OuterVolumeSpecName: "pod-info") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.395931 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355" (OuterVolumeSpecName: "persistence") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "pvc-992aa1be-297c-4607-86fd-1bdeae25a355". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.411747 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-server-conf" (OuterVolumeSpecName: "server-conf") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.456434 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6ec25425-8dd3-4458-afcf-02ab3f166e97" (UID: "6ec25425-8dd3-4458-afcf-02ab3f166e97"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483267 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483321 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64f8476f-48e6-4190-8c0d-436a672f8e62-server-conf\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483343 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483370 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64f8476f-48e6-4190-8c0d-436a672f8e62-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64f8476f-48e6-4190-8c0d-436a672f8e62-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " 
pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483482 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483519 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgb2\" (UniqueName: \"kubernetes.io/projected/64f8476f-48e6-4190-8c0d-436a672f8e62-kube-api-access-pzgb2\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483543 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483564 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64f8476f-48e6-4190-8c0d-436a672f8e62-pod-info\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483610 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58s5v\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-kube-api-access-58s5v\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483631 4799 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") on node \"crc\" " Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483643 4799 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ec25425-8dd3-4458-afcf-02ab3f166e97-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483653 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483661 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483669 4799 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483677 4799 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ec25425-8dd3-4458-afcf-02ab3f166e97-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483685 4799 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ec25425-8dd3-4458-afcf-02ab3f166e97-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.483692 4799 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ec25425-8dd3-4458-afcf-02ab3f166e97-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.501665 4799 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.501995 4799 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-992aa1be-297c-4607-86fd-1bdeae25a355" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355") on node "crc" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584737 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64f8476f-48e6-4190-8c0d-436a672f8e62-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584792 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64f8476f-48e6-4190-8c0d-436a672f8e62-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584858 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584887 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzgb2\" (UniqueName: 
\"kubernetes.io/projected/64f8476f-48e6-4190-8c0d-436a672f8e62-kube-api-access-pzgb2\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584910 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584928 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64f8476f-48e6-4190-8c0d-436a672f8e62-pod-info\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584966 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.584982 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64f8476f-48e6-4190-8c0d-436a672f8e62-server-conf\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.585000 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.585040 4799 reconciler_common.go:293] "Volume detached for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.586041 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.586076 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/64f8476f-48e6-4190-8c0d-436a672f8e62-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.586109 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.587101 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.587130 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78a41ffdc2eea7b69c2b06ff892c0ca32b72f90ef3ddbfbd963acc48ca8f6c16/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.587242 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/64f8476f-48e6-4190-8c0d-436a672f8e62-server-conf\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.588478 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/64f8476f-48e6-4190-8c0d-436a672f8e62-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.588482 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/64f8476f-48e6-4190-8c0d-436a672f8e62-pod-info\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.589081 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/64f8476f-48e6-4190-8c0d-436a672f8e62-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " 
pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.604520 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzgb2\" (UniqueName: \"kubernetes.io/projected/64f8476f-48e6-4190-8c0d-436a672f8e62-kube-api-access-pzgb2\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.614183 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3aa560bb-a0c4-4fac-8064-97f32414cdda\") pod \"rabbitmq-server-0\" (UID: \"64f8476f-48e6-4190-8c0d-436a672f8e62\") " pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.635283 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.965146 4799 generic.go:334] "Generic (PLEG): container finished" podID="6ec25425-8dd3-4458-afcf-02ab3f166e97" containerID="0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f" exitCode=0 Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.965248 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ec25425-8dd3-4458-afcf-02ab3f166e97","Type":"ContainerDied","Data":"0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f"} Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.965283 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.965426 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6ec25425-8dd3-4458-afcf-02ab3f166e97","Type":"ContainerDied","Data":"f6874d086044f64527ffa5f6a9e85cdda468c7b949ca50c2c82cfbbd331b434e"} Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.965453 4799 scope.go:117] "RemoveContainer" containerID="0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f" Jan 27 09:06:27 crc kubenswrapper[4799]: I0127 09:06:27.993682 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.000599 4799 scope.go:117] "RemoveContainer" containerID="3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.000746 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.008206 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.031634 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.033185 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.036994 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.037236 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.037256 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.037615 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.037635 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ctk4h" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.038089 4799 scope.go:117] "RemoveContainer" containerID="0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f" Jan 27 09:06:28 crc kubenswrapper[4799]: E0127 09:06:28.041279 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f\": container with ID starting with 0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f not found: ID does not exist" containerID="0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.041380 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f"} err="failed to get container status \"0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f\": rpc error: code = NotFound desc = could not find container 
\"0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f\": container with ID starting with 0691c44cc5fc7f990413d8e3db8fff9bd3696b70fc533168f4e90ae4ab693e4f not found: ID does not exist" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.041419 4799 scope.go:117] "RemoveContainer" containerID="3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123" Jan 27 09:06:28 crc kubenswrapper[4799]: E0127 09:06:28.044761 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123\": container with ID starting with 3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123 not found: ID does not exist" containerID="3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.044799 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123"} err="failed to get container status \"3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123\": rpc error: code = NotFound desc = could not find container \"3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123\": container with ID starting with 3412dd517df6c618fe9d08e6db03b2901c295960f8258fd8ce51f13b9c027123 not found: ID does not exist" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.049179 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.098983 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099064 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099123 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73389771-5c03-49f6-96ac-a57864153a5f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099175 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099207 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vznkm\" (UniqueName: \"kubernetes.io/projected/73389771-5c03-49f6-96ac-a57864153a5f-kube-api-access-vznkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099283 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099324 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73389771-5c03-49f6-96ac-a57864153a5f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73389771-5c03-49f6-96ac-a57864153a5f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.099498 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73389771-5c03-49f6-96ac-a57864153a5f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.101523 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5r5hj"] Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.102028 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" podUID="9e9901af-5957-43d7-a8a2-dd341614a031" containerName="dnsmasq-dns" containerID="cri-o://78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0" gracePeriod=10 Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.134173 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 
09:06:28.200253 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73389771-5c03-49f6-96ac-a57864153a5f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.200496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.200585 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vznkm\" (UniqueName: \"kubernetes.io/projected/73389771-5c03-49f6-96ac-a57864153a5f-kube-api-access-vznkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.200753 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.201358 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73389771-5c03-49f6-96ac-a57864153a5f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.201511 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73389771-5c03-49f6-96ac-a57864153a5f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.201088 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.201742 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73389771-5c03-49f6-96ac-a57864153a5f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.202760 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73389771-5c03-49f6-96ac-a57864153a5f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.203680 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.203839 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c240c2644e246a982d55ead9dac1e2ab192baebc70b563e210e646a1188c2985/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.204262 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73389771-5c03-49f6-96ac-a57864153a5f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.204443 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.204547 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.205440 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.206099 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73389771-5c03-49f6-96ac-a57864153a5f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.209828 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73389771-5c03-49f6-96ac-a57864153a5f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.210205 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73389771-5c03-49f6-96ac-a57864153a5f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.217722 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vznkm\" (UniqueName: \"kubernetes.io/projected/73389771-5c03-49f6-96ac-a57864153a5f-kube-api-access-vznkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.240760 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-992aa1be-297c-4607-86fd-1bdeae25a355\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-992aa1be-297c-4607-86fd-1bdeae25a355\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"73389771-5c03-49f6-96ac-a57864153a5f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.370137 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.467891 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24" path="/var/lib/kubelet/pods/5a39570f-ce03-4a9a-9c3b-5f7b7bb86d24/volumes" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.469390 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec25425-8dd3-4458-afcf-02ab3f166e97" path="/var/lib/kubelet/pods/6ec25425-8dd3-4458-afcf-02ab3f166e97/volumes" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.549140 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.611522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-dns-svc\") pod \"9e9901af-5957-43d7-a8a2-dd341614a031\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.611566 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6ggk\" (UniqueName: \"kubernetes.io/projected/9e9901af-5957-43d7-a8a2-dd341614a031-kube-api-access-d6ggk\") pod \"9e9901af-5957-43d7-a8a2-dd341614a031\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.611592 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config\") pod \"9e9901af-5957-43d7-a8a2-dd341614a031\" (UID: 
\"9e9901af-5957-43d7-a8a2-dd341614a031\") " Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.615161 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9901af-5957-43d7-a8a2-dd341614a031-kube-api-access-d6ggk" (OuterVolumeSpecName: "kube-api-access-d6ggk") pod "9e9901af-5957-43d7-a8a2-dd341614a031" (UID: "9e9901af-5957-43d7-a8a2-dd341614a031"). InnerVolumeSpecName "kube-api-access-d6ggk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:06:28 crc kubenswrapper[4799]: E0127 09:06:28.643063 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config podName:9e9901af-5957-43d7-a8a2-dd341614a031 nodeName:}" failed. No retries permitted until 2026-01-27 09:06:29.143031819 +0000 UTC m=+4855.454135884 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config") pod "9e9901af-5957-43d7-a8a2-dd341614a031" (UID: "9e9901af-5957-43d7-a8a2-dd341614a031") : error deleting /var/lib/kubelet/pods/9e9901af-5957-43d7-a8a2-dd341614a031/volume-subpaths: remove /var/lib/kubelet/pods/9e9901af-5957-43d7-a8a2-dd341614a031/volume-subpaths: no such file or directory Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.643310 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e9901af-5957-43d7-a8a2-dd341614a031" (UID: "9e9901af-5957-43d7-a8a2-dd341614a031"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.713133 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.713167 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6ggk\" (UniqueName: \"kubernetes.io/projected/9e9901af-5957-43d7-a8a2-dd341614a031-kube-api-access-d6ggk\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.807666 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 09:06:28 crc kubenswrapper[4799]: W0127 09:06:28.818399 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73389771_5c03_49f6_96ac_a57864153a5f.slice/crio-482be040d86248eaf4bcf9254580f23ff3dd73d7ce2cba2856333b9f60e4a1b8 WatchSource:0}: Error finding container 482be040d86248eaf4bcf9254580f23ff3dd73d7ce2cba2856333b9f60e4a1b8: Status 404 returned error can't find the container with id 482be040d86248eaf4bcf9254580f23ff3dd73d7ce2cba2856333b9f60e4a1b8 Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.977987 4799 generic.go:334] "Generic (PLEG): container finished" podID="9e9901af-5957-43d7-a8a2-dd341614a031" containerID="78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0" exitCode=0 Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.978075 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" event={"ID":"9e9901af-5957-43d7-a8a2-dd341614a031","Type":"ContainerDied","Data":"78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0"} Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.978109 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" event={"ID":"9e9901af-5957-43d7-a8a2-dd341614a031","Type":"ContainerDied","Data":"24af5f3da41eba7e586e328c950a485a8f0d6f3875bf5afdf07686d6128a582c"} Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.978128 4799 scope.go:117] "RemoveContainer" containerID="78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.979576 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5r5hj" Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.983190 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64f8476f-48e6-4190-8c0d-436a672f8e62","Type":"ContainerStarted","Data":"8d70cc0b748b411327b549afecbe1a8b4c6a5d42325036a6017e4e5eef624422"} Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.984305 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73389771-5c03-49f6-96ac-a57864153a5f","Type":"ContainerStarted","Data":"482be040d86248eaf4bcf9254580f23ff3dd73d7ce2cba2856333b9f60e4a1b8"} Jan 27 09:06:28 crc kubenswrapper[4799]: I0127 09:06:28.996834 4799 scope.go:117] "RemoveContainer" containerID="73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a" Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.027708 4799 scope.go:117] "RemoveContainer" containerID="78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0" Jan 27 09:06:29 crc kubenswrapper[4799]: E0127 09:06:29.028179 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0\": container with ID starting with 78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0 not found: ID does not exist" containerID="78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0" Jan 
27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.028220 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0"} err="failed to get container status \"78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0\": rpc error: code = NotFound desc = could not find container \"78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0\": container with ID starting with 78e0cb19c7483574118d9d7b4bd19218706d0381b286f59afbab3960a8682df0 not found: ID does not exist" Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.028244 4799 scope.go:117] "RemoveContainer" containerID="73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a" Jan 27 09:06:29 crc kubenswrapper[4799]: E0127 09:06:29.028666 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a\": container with ID starting with 73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a not found: ID does not exist" containerID="73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a" Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.028740 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a"} err="failed to get container status \"73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a\": rpc error: code = NotFound desc = could not find container \"73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a\": container with ID starting with 73c5a4259d954c472933ee679cd22628c0f57c9aeb6183280403a83f983fcf8a not found: ID does not exist" Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.223157 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config\") pod \"9e9901af-5957-43d7-a8a2-dd341614a031\" (UID: \"9e9901af-5957-43d7-a8a2-dd341614a031\") " Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.223920 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config" (OuterVolumeSpecName: "config") pod "9e9901af-5957-43d7-a8a2-dd341614a031" (UID: "9e9901af-5957-43d7-a8a2-dd341614a031"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.314000 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5r5hj"] Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.320515 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5r5hj"] Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.325346 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9901af-5957-43d7-a8a2-dd341614a031-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.996821 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64f8476f-48e6-4190-8c0d-436a672f8e62","Type":"ContainerStarted","Data":"d59a852f9d2c57ccdbb1c831b355b704dd73aa8a910822bf79ec35f2597c093d"} Jan 27 09:06:29 crc kubenswrapper[4799]: I0127 09:06:29.998812 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73389771-5c03-49f6-96ac-a57864153a5f","Type":"ContainerStarted","Data":"22b7042b4725805c4d20d265ba19ebb3ae07c830fadcaa56b98112e520c306e1"} Jan 27 09:06:30 crc kubenswrapper[4799]: I0127 09:06:30.462715 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9901af-5957-43d7-a8a2-dd341614a031" 
path="/var/lib/kubelet/pods/9e9901af-5957-43d7-a8a2-dd341614a031/volumes" Jan 27 09:06:34 crc kubenswrapper[4799]: I0127 09:06:34.456001 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:06:34 crc kubenswrapper[4799]: E0127 09:06:34.456592 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:06:48 crc kubenswrapper[4799]: I0127 09:06:48.451759 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:06:48 crc kubenswrapper[4799]: E0127 09:06:48.453854 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:07:02 crc kubenswrapper[4799]: I0127 09:07:02.280734 4799 generic.go:334] "Generic (PLEG): container finished" podID="64f8476f-48e6-4190-8c0d-436a672f8e62" containerID="d59a852f9d2c57ccdbb1c831b355b704dd73aa8a910822bf79ec35f2597c093d" exitCode=0 Jan 27 09:07:02 crc kubenswrapper[4799]: I0127 09:07:02.280855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64f8476f-48e6-4190-8c0d-436a672f8e62","Type":"ContainerDied","Data":"d59a852f9d2c57ccdbb1c831b355b704dd73aa8a910822bf79ec35f2597c093d"} Jan 27 09:07:02 crc 
kubenswrapper[4799]: I0127 09:07:02.285204 4799 generic.go:334] "Generic (PLEG): container finished" podID="73389771-5c03-49f6-96ac-a57864153a5f" containerID="22b7042b4725805c4d20d265ba19ebb3ae07c830fadcaa56b98112e520c306e1" exitCode=0 Jan 27 09:07:02 crc kubenswrapper[4799]: I0127 09:07:02.285255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73389771-5c03-49f6-96ac-a57864153a5f","Type":"ContainerDied","Data":"22b7042b4725805c4d20d265ba19ebb3ae07c830fadcaa56b98112e520c306e1"} Jan 27 09:07:02 crc kubenswrapper[4799]: I0127 09:07:02.451130 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:07:03 crc kubenswrapper[4799]: I0127 09:07:03.294715 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73389771-5c03-49f6-96ac-a57864153a5f","Type":"ContainerStarted","Data":"63bc69fdc0b2964c89048ce35449f2a5bc0b263c906941c5443efa630a9ad2a7"} Jan 27 09:07:03 crc kubenswrapper[4799]: I0127 09:07:03.296462 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:07:03 crc kubenswrapper[4799]: I0127 09:07:03.297598 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"64f8476f-48e6-4190-8c0d-436a672f8e62","Type":"ContainerStarted","Data":"7f3ad946841bf295a36db14d008ce46f8880b7c583125f268ed29e704428b74e"} Jan 27 09:07:03 crc kubenswrapper[4799]: I0127 09:07:03.297817 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 09:07:03 crc kubenswrapper[4799]: I0127 09:07:03.299990 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"1d935470e42da2a88c4f12bf0eaba98305672f9b4f487c92db4e981fe15a0e50"} Jan 27 09:07:03 crc kubenswrapper[4799]: I0127 09:07:03.317358 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.317336316 podStartE2EDuration="35.317336316s" podCreationTimestamp="2026-01-27 09:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:07:03.312512695 +0000 UTC m=+4889.623616770" watchObservedRunningTime="2026-01-27 09:07:03.317336316 +0000 UTC m=+4889.628440391" Jan 27 09:07:03 crc kubenswrapper[4799]: I0127 09:07:03.341710 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.341688511 podStartE2EDuration="36.341688511s" podCreationTimestamp="2026-01-27 09:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:07:03.33397678 +0000 UTC m=+4889.645080865" watchObservedRunningTime="2026-01-27 09:07:03.341688511 +0000 UTC m=+4889.652792576" Jan 27 09:07:17 crc kubenswrapper[4799]: I0127 09:07:17.638537 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 09:07:18 crc kubenswrapper[4799]: I0127 09:07:18.373226 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 09:07:24 crc kubenswrapper[4799]: I0127 09:07:24.818971 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 27 09:07:24 crc kubenswrapper[4799]: E0127 09:07:24.821619 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9901af-5957-43d7-a8a2-dd341614a031" containerName="init" Jan 27 09:07:24 crc 
kubenswrapper[4799]: I0127 09:07:24.821814 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9901af-5957-43d7-a8a2-dd341614a031" containerName="init" Jan 27 09:07:24 crc kubenswrapper[4799]: E0127 09:07:24.821981 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9901af-5957-43d7-a8a2-dd341614a031" containerName="dnsmasq-dns" Jan 27 09:07:24 crc kubenswrapper[4799]: I0127 09:07:24.822122 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9901af-5957-43d7-a8a2-dd341614a031" containerName="dnsmasq-dns" Jan 27 09:07:24 crc kubenswrapper[4799]: I0127 09:07:24.822662 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9901af-5957-43d7-a8a2-dd341614a031" containerName="dnsmasq-dns" Jan 27 09:07:24 crc kubenswrapper[4799]: I0127 09:07:24.823689 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:07:24 crc kubenswrapper[4799]: I0127 09:07:24.828264 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fx482" Jan 27 09:07:24 crc kubenswrapper[4799]: I0127 09:07:24.832095 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:07:24 crc kubenswrapper[4799]: I0127 09:07:24.920551 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lspsx\" (UniqueName: \"kubernetes.io/projected/24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24-kube-api-access-lspsx\") pod \"mariadb-client\" (UID: \"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24\") " pod="openstack/mariadb-client" Jan 27 09:07:25 crc kubenswrapper[4799]: I0127 09:07:25.023188 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lspsx\" (UniqueName: \"kubernetes.io/projected/24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24-kube-api-access-lspsx\") pod \"mariadb-client\" (UID: \"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24\") " 
pod="openstack/mariadb-client" Jan 27 09:07:25 crc kubenswrapper[4799]: I0127 09:07:25.050349 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lspsx\" (UniqueName: \"kubernetes.io/projected/24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24-kube-api-access-lspsx\") pod \"mariadb-client\" (UID: \"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24\") " pod="openstack/mariadb-client" Jan 27 09:07:25 crc kubenswrapper[4799]: I0127 09:07:25.157977 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:07:25 crc kubenswrapper[4799]: I0127 09:07:25.727631 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:07:26 crc kubenswrapper[4799]: I0127 09:07:26.506190 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24","Type":"ContainerStarted","Data":"db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2"} Jan 27 09:07:26 crc kubenswrapper[4799]: I0127 09:07:26.506542 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24","Type":"ContainerStarted","Data":"fbd3539d8c4fa9f7dd9478294efc62bd552c5840f1e4ea9afc643821765c8c5f"} Jan 27 09:07:26 crc kubenswrapper[4799]: I0127 09:07:26.525401 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.525383105 podStartE2EDuration="2.525383105s" podCreationTimestamp="2026-01-27 09:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:07:26.520972385 +0000 UTC m=+4912.832076490" watchObservedRunningTime="2026-01-27 09:07:26.525383105 +0000 UTC m=+4912.836487180" Jan 27 09:07:39 crc kubenswrapper[4799]: I0127 09:07:39.965716 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/mariadb-client"] Jan 27 09:07:39 crc kubenswrapper[4799]: I0127 09:07:39.966573 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24" containerName="mariadb-client" containerID="cri-o://db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2" gracePeriod=30 Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.429667 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.470721 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lspsx\" (UniqueName: \"kubernetes.io/projected/24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24-kube-api-access-lspsx\") pod \"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24\" (UID: \"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24\") " Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.477722 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24-kube-api-access-lspsx" (OuterVolumeSpecName: "kube-api-access-lspsx") pod "24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24" (UID: "24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24"). InnerVolumeSpecName "kube-api-access-lspsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.572315 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lspsx\" (UniqueName: \"kubernetes.io/projected/24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24-kube-api-access-lspsx\") on node \"crc\" DevicePath \"\"" Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.624369 4799 generic.go:334] "Generic (PLEG): container finished" podID="24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24" containerID="db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2" exitCode=143 Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.624626 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24","Type":"ContainerDied","Data":"db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2"} Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.624678 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24","Type":"ContainerDied","Data":"fbd3539d8c4fa9f7dd9478294efc62bd552c5840f1e4ea9afc643821765c8c5f"} Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.624675 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.625144 4799 scope.go:117] "RemoveContainer" containerID="db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2" Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.646650 4799 scope.go:117] "RemoveContainer" containerID="db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2" Jan 27 09:07:40 crc kubenswrapper[4799]: E0127 09:07:40.647141 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2\": container with ID starting with db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2 not found: ID does not exist" containerID="db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2" Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.647177 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2"} err="failed to get container status \"db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2\": rpc error: code = NotFound desc = could not find container \"db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2\": container with ID starting with db1d8ccca2e173bdd385df423066d335abf5a66bf26174805817223c8c1ca9c2 not found: ID does not exist" Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.661789 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:07:40 crc kubenswrapper[4799]: I0127 09:07:40.667402 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:07:42 crc kubenswrapper[4799]: I0127 09:07:42.462587 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24" 
path="/var/lib/kubelet/pods/24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24/volumes" Jan 27 09:08:39 crc kubenswrapper[4799]: I0127 09:08:39.741361 4799 scope.go:117] "RemoveContainer" containerID="607c0d7ea684b8229f809164cd1162587bdcee77fa82bdad5a0d7779ebd9693b" Jan 27 09:09:23 crc kubenswrapper[4799]: I0127 09:09:23.731189 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:09:23 crc kubenswrapper[4799]: I0127 09:09:23.732040 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:09:53 crc kubenswrapper[4799]: I0127 09:09:53.731942 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:09:53 crc kubenswrapper[4799]: I0127 09:09:53.732822 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:10:23 crc kubenswrapper[4799]: I0127 09:10:23.731911 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:10:23 crc kubenswrapper[4799]: I0127 09:10:23.733973 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:10:23 crc kubenswrapper[4799]: I0127 09:10:23.734118 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:10:23 crc kubenswrapper[4799]: I0127 09:10:23.735231 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d935470e42da2a88c4f12bf0eaba98305672f9b4f487c92db4e981fe15a0e50"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:10:23 crc kubenswrapper[4799]: I0127 09:10:23.735432 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://1d935470e42da2a88c4f12bf0eaba98305672f9b4f487c92db4e981fe15a0e50" gracePeriod=600 Jan 27 09:10:24 crc kubenswrapper[4799]: I0127 09:10:24.048830 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="1d935470e42da2a88c4f12bf0eaba98305672f9b4f487c92db4e981fe15a0e50" exitCode=0 Jan 27 09:10:24 crc kubenswrapper[4799]: I0127 09:10:24.048931 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"1d935470e42da2a88c4f12bf0eaba98305672f9b4f487c92db4e981fe15a0e50"} Jan 27 09:10:24 crc kubenswrapper[4799]: I0127 09:10:24.049129 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"} Jan 27 09:10:24 crc kubenswrapper[4799]: I0127 09:10:24.049155 4799 scope.go:117] "RemoveContainer" containerID="0ae2b074b2aecd3d975e00313530552484a4aac325b082f40c90da954e3e6f0e" Jan 27 09:11:39 crc kubenswrapper[4799]: I0127 09:11:39.850702 4799 scope.go:117] "RemoveContainer" containerID="7721abeec753fd11cea9587708d8a3ba76c6fa8a58d524688e57bbd6a8080ae5" Jan 27 09:11:39 crc kubenswrapper[4799]: I0127 09:11:39.888101 4799 scope.go:117] "RemoveContainer" containerID="a93379cbabe10c4773e15c75fe098a96618e4de326ff981fca1578e409749aa0" Jan 27 09:11:39 crc kubenswrapper[4799]: I0127 09:11:39.928343 4799 scope.go:117] "RemoveContainer" containerID="49632e0c5da7df88e0c1e989a68db73211997faf35005d4e3a6eb2a8604bcc3b" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.335479 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z6jk4"] Jan 27 09:11:55 crc kubenswrapper[4799]: E0127 09:11:55.337386 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24" containerName="mariadb-client" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.337416 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24" containerName="mariadb-client" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.337828 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d3dd8d-bdc3-42e3-9b9b-1a2cc00aac24" containerName="mariadb-client" Jan 27 09:11:55 crc 
kubenswrapper[4799]: I0127 09:11:55.341556 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.365658 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6jk4"] Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.475437 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6h4m\" (UniqueName: \"kubernetes.io/projected/b16321d5-dd18-47d5-8df0-ec6885b376fa-kube-api-access-d6h4m\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.475545 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-catalog-content\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.475594 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-utilities\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.577374 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6h4m\" (UniqueName: \"kubernetes.io/projected/b16321d5-dd18-47d5-8df0-ec6885b376fa-kube-api-access-d6h4m\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 
09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.577476 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-catalog-content\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.577521 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-utilities\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.577972 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-utilities\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.578390 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-catalog-content\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.597346 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6h4m\" (UniqueName: \"kubernetes.io/projected/b16321d5-dd18-47d5-8df0-ec6885b376fa-kube-api-access-d6h4m\") pod \"redhat-marketplace-z6jk4\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:55 crc kubenswrapper[4799]: I0127 09:11:55.690222 4799 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:11:56 crc kubenswrapper[4799]: I0127 09:11:56.118023 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6jk4"] Jan 27 09:11:56 crc kubenswrapper[4799]: I0127 09:11:56.941601 4799 generic.go:334] "Generic (PLEG): container finished" podID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerID="a20f6e465fff6d619147a4d1c2306c4527806a423338d6c3792ad2d27ffc38c3" exitCode=0 Jan 27 09:11:56 crc kubenswrapper[4799]: I0127 09:11:56.941712 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6jk4" event={"ID":"b16321d5-dd18-47d5-8df0-ec6885b376fa","Type":"ContainerDied","Data":"a20f6e465fff6d619147a4d1c2306c4527806a423338d6c3792ad2d27ffc38c3"} Jan 27 09:11:56 crc kubenswrapper[4799]: I0127 09:11:56.941941 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6jk4" event={"ID":"b16321d5-dd18-47d5-8df0-ec6885b376fa","Type":"ContainerStarted","Data":"1d15d39cb9b30df3df1d489645c1917f57102e94e3263db5d6dfaad31d38ed3f"} Jan 27 09:11:56 crc kubenswrapper[4799]: I0127 09:11:56.944713 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:11:58 crc kubenswrapper[4799]: I0127 09:11:58.963051 4799 generic.go:334] "Generic (PLEG): container finished" podID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerID="acaa644901e4a619692a1ee779f50240122e2faf6047210a23ec9719c81b1427" exitCode=0 Jan 27 09:11:58 crc kubenswrapper[4799]: I0127 09:11:58.963197 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6jk4" event={"ID":"b16321d5-dd18-47d5-8df0-ec6885b376fa","Type":"ContainerDied","Data":"acaa644901e4a619692a1ee779f50240122e2faf6047210a23ec9719c81b1427"} Jan 27 09:11:59 crc kubenswrapper[4799]: I0127 09:11:59.975597 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6jk4" event={"ID":"b16321d5-dd18-47d5-8df0-ec6885b376fa","Type":"ContainerStarted","Data":"c11c590111c6426194a8bea9fde8b9f20a7b1d7296effc28ed3fe44f2f50a26b"} Jan 27 09:11:59 crc kubenswrapper[4799]: I0127 09:11:59.995958 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z6jk4" podStartSLOduration=2.533585669 podStartE2EDuration="4.995938567s" podCreationTimestamp="2026-01-27 09:11:55 +0000 UTC" firstStartedPulling="2026-01-27 09:11:56.944354635 +0000 UTC m=+5183.255458700" lastFinishedPulling="2026-01-27 09:11:59.406707533 +0000 UTC m=+5185.717811598" observedRunningTime="2026-01-27 09:11:59.993632863 +0000 UTC m=+5186.304736968" watchObservedRunningTime="2026-01-27 09:11:59.995938567 +0000 UTC m=+5186.307042632" Jan 27 09:12:05 crc kubenswrapper[4799]: I0127 09:12:05.691243 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:12:05 crc kubenswrapper[4799]: I0127 09:12:05.693423 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:12:05 crc kubenswrapper[4799]: I0127 09:12:05.744872 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:12:06 crc kubenswrapper[4799]: I0127 09:12:06.073046 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:12:06 crc kubenswrapper[4799]: I0127 09:12:06.135134 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6jk4"] Jan 27 09:12:08 crc kubenswrapper[4799]: I0127 09:12:08.044684 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-z6jk4" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="registry-server" containerID="cri-o://c11c590111c6426194a8bea9fde8b9f20a7b1d7296effc28ed3fe44f2f50a26b" gracePeriod=2 Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.060256 4799 generic.go:334] "Generic (PLEG): container finished" podID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerID="c11c590111c6426194a8bea9fde8b9f20a7b1d7296effc28ed3fe44f2f50a26b" exitCode=0 Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.060704 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6jk4" event={"ID":"b16321d5-dd18-47d5-8df0-ec6885b376fa","Type":"ContainerDied","Data":"c11c590111c6426194a8bea9fde8b9f20a7b1d7296effc28ed3fe44f2f50a26b"} Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.205381 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.319514 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6h4m\" (UniqueName: \"kubernetes.io/projected/b16321d5-dd18-47d5-8df0-ec6885b376fa-kube-api-access-d6h4m\") pod \"b16321d5-dd18-47d5-8df0-ec6885b376fa\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.319660 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-catalog-content\") pod \"b16321d5-dd18-47d5-8df0-ec6885b376fa\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.319753 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-utilities\") pod 
\"b16321d5-dd18-47d5-8df0-ec6885b376fa\" (UID: \"b16321d5-dd18-47d5-8df0-ec6885b376fa\") " Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.321379 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-utilities" (OuterVolumeSpecName: "utilities") pod "b16321d5-dd18-47d5-8df0-ec6885b376fa" (UID: "b16321d5-dd18-47d5-8df0-ec6885b376fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.328444 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16321d5-dd18-47d5-8df0-ec6885b376fa-kube-api-access-d6h4m" (OuterVolumeSpecName: "kube-api-access-d6h4m") pod "b16321d5-dd18-47d5-8df0-ec6885b376fa" (UID: "b16321d5-dd18-47d5-8df0-ec6885b376fa"). InnerVolumeSpecName "kube-api-access-d6h4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.422438 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6h4m\" (UniqueName: \"kubernetes.io/projected/b16321d5-dd18-47d5-8df0-ec6885b376fa-kube-api-access-d6h4m\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:09 crc kubenswrapper[4799]: I0127 09:12:09.422493 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:10 crc kubenswrapper[4799]: I0127 09:12:10.071928 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z6jk4" event={"ID":"b16321d5-dd18-47d5-8df0-ec6885b376fa","Type":"ContainerDied","Data":"1d15d39cb9b30df3df1d489645c1917f57102e94e3263db5d6dfaad31d38ed3f"} Jan 27 09:12:10 crc kubenswrapper[4799]: I0127 09:12:10.072349 4799 scope.go:117] "RemoveContainer" 
containerID="c11c590111c6426194a8bea9fde8b9f20a7b1d7296effc28ed3fe44f2f50a26b" Jan 27 09:12:10 crc kubenswrapper[4799]: I0127 09:12:10.072014 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z6jk4" Jan 27 09:12:10 crc kubenswrapper[4799]: I0127 09:12:10.100540 4799 scope.go:117] "RemoveContainer" containerID="acaa644901e4a619692a1ee779f50240122e2faf6047210a23ec9719c81b1427" Jan 27 09:12:10 crc kubenswrapper[4799]: I0127 09:12:10.135843 4799 scope.go:117] "RemoveContainer" containerID="a20f6e465fff6d619147a4d1c2306c4527806a423338d6c3792ad2d27ffc38c3" Jan 27 09:12:10 crc kubenswrapper[4799]: I0127 09:12:10.686590 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b16321d5-dd18-47d5-8df0-ec6885b376fa" (UID: "b16321d5-dd18-47d5-8df0-ec6885b376fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:12:10 crc kubenswrapper[4799]: I0127 09:12:10.748403 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16321d5-dd18-47d5-8df0-ec6885b376fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:11 crc kubenswrapper[4799]: I0127 09:12:11.009532 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6jk4"] Jan 27 09:12:11 crc kubenswrapper[4799]: I0127 09:12:11.020024 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z6jk4"] Jan 27 09:12:12 crc kubenswrapper[4799]: I0127 09:12:12.470360 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" path="/var/lib/kubelet/pods/b16321d5-dd18-47d5-8df0-ec6885b376fa/volumes" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.094064 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 27 09:12:18 crc kubenswrapper[4799]: E0127 09:12:18.094985 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="registry-server" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.095000 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="registry-server" Jan 27 09:12:18 crc kubenswrapper[4799]: E0127 09:12:18.095015 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="extract-utilities" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.095021 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="extract-utilities" Jan 27 09:12:18 crc kubenswrapper[4799]: E0127 09:12:18.095045 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="extract-content" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.095052 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="extract-content" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.095204 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16321d5-dd18-47d5-8df0-ec6885b376fa" containerName="registry-server" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.095721 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.099565 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fx482" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.109206 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.179767 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\") pod \"mariadb-copy-data\" (UID: \"cd96bf26-b547-49bf-8ee2-15c25fe611fc\") " pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.180170 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sfj2\" (UniqueName: \"kubernetes.io/projected/cd96bf26-b547-49bf-8ee2-15c25fe611fc-kube-api-access-9sfj2\") pod \"mariadb-copy-data\" (UID: \"cd96bf26-b547-49bf-8ee2-15c25fe611fc\") " pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.282140 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sfj2\" (UniqueName: 
\"kubernetes.io/projected/cd96bf26-b547-49bf-8ee2-15c25fe611fc-kube-api-access-9sfj2\") pod \"mariadb-copy-data\" (UID: \"cd96bf26-b547-49bf-8ee2-15c25fe611fc\") " pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.282203 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\") pod \"mariadb-copy-data\" (UID: \"cd96bf26-b547-49bf-8ee2-15c25fe611fc\") " pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.285485 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.285543 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\") pod \"mariadb-copy-data\" (UID: \"cd96bf26-b547-49bf-8ee2-15c25fe611fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c16efc06fafae81804cd4f5bc52db349829da370c186b84c9bd6de528a8dcf87/globalmount\"" pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.301781 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sfj2\" (UniqueName: \"kubernetes.io/projected/cd96bf26-b547-49bf-8ee2-15c25fe611fc-kube-api-access-9sfj2\") pod \"mariadb-copy-data\" (UID: \"cd96bf26-b547-49bf-8ee2-15c25fe611fc\") " pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.312134 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08ac92dd-f09e-4f4a-ad45-3b7ffc182ad8\") pod \"mariadb-copy-data\" (UID: \"cd96bf26-b547-49bf-8ee2-15c25fe611fc\") " pod="openstack/mariadb-copy-data" Jan 27 09:12:18 crc kubenswrapper[4799]: I0127 09:12:18.421607 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 27 09:12:19 crc kubenswrapper[4799]: I0127 09:12:19.017592 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 27 09:12:19 crc kubenswrapper[4799]: I0127 09:12:19.145982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"cd96bf26-b547-49bf-8ee2-15c25fe611fc","Type":"ContainerStarted","Data":"0f9a052fa133a17d263012a1425dacf74105687405d2b9923448aa874c785c3f"} Jan 27 09:12:20 crc kubenswrapper[4799]: I0127 09:12:20.154735 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"cd96bf26-b547-49bf-8ee2-15c25fe611fc","Type":"ContainerStarted","Data":"e77b9a3b81ead50e11f93ab87b4e08c9f073057549b2a4f895909aa38123f52a"} Jan 27 09:12:22 crc kubenswrapper[4799]: I0127 09:12:22.941452 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=5.941428301 podStartE2EDuration="5.941428301s" podCreationTimestamp="2026-01-27 09:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:12:20.175712038 +0000 UTC m=+5206.486816103" watchObservedRunningTime="2026-01-27 09:12:22.941428301 +0000 UTC m=+5209.252532386" Jan 27 09:12:22 crc kubenswrapper[4799]: I0127 09:12:22.944747 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:22 crc kubenswrapper[4799]: I0127 09:12:22.945673 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:22 crc kubenswrapper[4799]: I0127 09:12:22.954995 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:23 crc kubenswrapper[4799]: I0127 09:12:23.063043 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzt56\" (UniqueName: \"kubernetes.io/projected/94bc228d-6818-445b-8056-466ccbe3960c-kube-api-access-rzt56\") pod \"mariadb-client\" (UID: \"94bc228d-6818-445b-8056-466ccbe3960c\") " pod="openstack/mariadb-client" Jan 27 09:12:23 crc kubenswrapper[4799]: I0127 09:12:23.164438 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzt56\" (UniqueName: \"kubernetes.io/projected/94bc228d-6818-445b-8056-466ccbe3960c-kube-api-access-rzt56\") pod \"mariadb-client\" (UID: \"94bc228d-6818-445b-8056-466ccbe3960c\") " pod="openstack/mariadb-client" Jan 27 09:12:23 crc kubenswrapper[4799]: I0127 09:12:23.188594 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzt56\" (UniqueName: \"kubernetes.io/projected/94bc228d-6818-445b-8056-466ccbe3960c-kube-api-access-rzt56\") pod \"mariadb-client\" (UID: \"94bc228d-6818-445b-8056-466ccbe3960c\") " pod="openstack/mariadb-client" Jan 27 09:12:23 crc kubenswrapper[4799]: I0127 09:12:23.282796 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:23 crc kubenswrapper[4799]: I0127 09:12:23.693693 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:23 crc kubenswrapper[4799]: W0127 09:12:23.695929 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94bc228d_6818_445b_8056_466ccbe3960c.slice/crio-624c510d0174228382ae11295adcc1459cfba3d0f3954db8d4e4a5129d035f7c WatchSource:0}: Error finding container 624c510d0174228382ae11295adcc1459cfba3d0f3954db8d4e4a5129d035f7c: Status 404 returned error can't find the container with id 624c510d0174228382ae11295adcc1459cfba3d0f3954db8d4e4a5129d035f7c Jan 27 09:12:24 crc kubenswrapper[4799]: I0127 09:12:24.197801 4799 generic.go:334] "Generic (PLEG): container finished" podID="94bc228d-6818-445b-8056-466ccbe3960c" containerID="b7158ecd5088e1489baf596a70b53b667c35a12b56f093612c92a19fc89ff77e" exitCode=0 Jan 27 09:12:24 crc kubenswrapper[4799]: I0127 09:12:24.197916 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"94bc228d-6818-445b-8056-466ccbe3960c","Type":"ContainerDied","Data":"b7158ecd5088e1489baf596a70b53b667c35a12b56f093612c92a19fc89ff77e"} Jan 27 09:12:24 crc kubenswrapper[4799]: I0127 09:12:24.200968 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"94bc228d-6818-445b-8056-466ccbe3960c","Type":"ContainerStarted","Data":"624c510d0174228382ae11295adcc1459cfba3d0f3954db8d4e4a5129d035f7c"} Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.490757 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.518671 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_94bc228d-6818-445b-8056-466ccbe3960c/mariadb-client/0.log" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.545062 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.552680 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.603487 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt56\" (UniqueName: \"kubernetes.io/projected/94bc228d-6818-445b-8056-466ccbe3960c-kube-api-access-rzt56\") pod \"94bc228d-6818-445b-8056-466ccbe3960c\" (UID: \"94bc228d-6818-445b-8056-466ccbe3960c\") " Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.608340 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94bc228d-6818-445b-8056-466ccbe3960c-kube-api-access-rzt56" (OuterVolumeSpecName: "kube-api-access-rzt56") pod "94bc228d-6818-445b-8056-466ccbe3960c" (UID: "94bc228d-6818-445b-8056-466ccbe3960c"). InnerVolumeSpecName "kube-api-access-rzt56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.671223 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:25 crc kubenswrapper[4799]: E0127 09:12:25.672894 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bc228d-6818-445b-8056-466ccbe3960c" containerName="mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.672945 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bc228d-6818-445b-8056-466ccbe3960c" containerName="mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.673422 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="94bc228d-6818-445b-8056-466ccbe3960c" containerName="mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.674460 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.678768 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.705768 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzt56\" (UniqueName: \"kubernetes.io/projected/94bc228d-6818-445b-8056-466ccbe3960c-kube-api-access-rzt56\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.807688 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7c8\" (UniqueName: \"kubernetes.io/projected/86dca6e4-4f85-46b6-a14c-97b78fe2b2cd-kube-api-access-vf7c8\") pod \"mariadb-client\" (UID: \"86dca6e4-4f85-46b6-a14c-97b78fe2b2cd\") " pod="openstack/mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.909139 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7c8\" (UniqueName: 
\"kubernetes.io/projected/86dca6e4-4f85-46b6-a14c-97b78fe2b2cd-kube-api-access-vf7c8\") pod \"mariadb-client\" (UID: \"86dca6e4-4f85-46b6-a14c-97b78fe2b2cd\") " pod="openstack/mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.933082 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7c8\" (UniqueName: \"kubernetes.io/projected/86dca6e4-4f85-46b6-a14c-97b78fe2b2cd-kube-api-access-vf7c8\") pod \"mariadb-client\" (UID: \"86dca6e4-4f85-46b6-a14c-97b78fe2b2cd\") " pod="openstack/mariadb-client" Jan 27 09:12:25 crc kubenswrapper[4799]: I0127 09:12:25.992019 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:26 crc kubenswrapper[4799]: I0127 09:12:26.216040 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624c510d0174228382ae11295adcc1459cfba3d0f3954db8d4e4a5129d035f7c" Jan 27 09:12:26 crc kubenswrapper[4799]: I0127 09:12:26.216119 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:26 crc kubenswrapper[4799]: I0127 09:12:26.233975 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="94bc228d-6818-445b-8056-466ccbe3960c" podUID="86dca6e4-4f85-46b6-a14c-97b78fe2b2cd" Jan 27 09:12:26 crc kubenswrapper[4799]: I0127 09:12:26.421067 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:26 crc kubenswrapper[4799]: W0127 09:12:26.425835 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86dca6e4_4f85_46b6_a14c_97b78fe2b2cd.slice/crio-acc30c6678e1d2fcf3351136a1c1ad69ec5a240f46a7dff3f042ba275631c657 WatchSource:0}: Error finding container acc30c6678e1d2fcf3351136a1c1ad69ec5a240f46a7dff3f042ba275631c657: Status 404 returned error can't find the container with id acc30c6678e1d2fcf3351136a1c1ad69ec5a240f46a7dff3f042ba275631c657 Jan 27 09:12:26 crc kubenswrapper[4799]: I0127 09:12:26.460863 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94bc228d-6818-445b-8056-466ccbe3960c" path="/var/lib/kubelet/pods/94bc228d-6818-445b-8056-466ccbe3960c/volumes" Jan 27 09:12:27 crc kubenswrapper[4799]: I0127 09:12:27.228404 4799 generic.go:334] "Generic (PLEG): container finished" podID="86dca6e4-4f85-46b6-a14c-97b78fe2b2cd" containerID="eef2cd17b4b5022e908e1a96b9120ea50b652e8ca3046f5298cb91026d255390" exitCode=0 Jan 27 09:12:27 crc kubenswrapper[4799]: I0127 09:12:27.228474 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"86dca6e4-4f85-46b6-a14c-97b78fe2b2cd","Type":"ContainerDied","Data":"eef2cd17b4b5022e908e1a96b9120ea50b652e8ca3046f5298cb91026d255390"} Jan 27 09:12:27 crc kubenswrapper[4799]: I0127 09:12:27.228541 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"86dca6e4-4f85-46b6-a14c-97b78fe2b2cd","Type":"ContainerStarted","Data":"acc30c6678e1d2fcf3351136a1c1ad69ec5a240f46a7dff3f042ba275631c657"} Jan 27 09:12:28 crc kubenswrapper[4799]: I0127 09:12:28.555869 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:28 crc kubenswrapper[4799]: I0127 09:12:28.579297 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_86dca6e4-4f85-46b6-a14c-97b78fe2b2cd/mariadb-client/0.log" Jan 27 09:12:28 crc kubenswrapper[4799]: I0127 09:12:28.615281 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:28 crc kubenswrapper[4799]: I0127 09:12:28.621037 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 27 09:12:28 crc kubenswrapper[4799]: I0127 09:12:28.649979 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf7c8\" (UniqueName: \"kubernetes.io/projected/86dca6e4-4f85-46b6-a14c-97b78fe2b2cd-kube-api-access-vf7c8\") pod \"86dca6e4-4f85-46b6-a14c-97b78fe2b2cd\" (UID: \"86dca6e4-4f85-46b6-a14c-97b78fe2b2cd\") " Jan 27 09:12:28 crc kubenswrapper[4799]: I0127 09:12:28.657791 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86dca6e4-4f85-46b6-a14c-97b78fe2b2cd-kube-api-access-vf7c8" (OuterVolumeSpecName: "kube-api-access-vf7c8") pod "86dca6e4-4f85-46b6-a14c-97b78fe2b2cd" (UID: "86dca6e4-4f85-46b6-a14c-97b78fe2b2cd"). InnerVolumeSpecName "kube-api-access-vf7c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:12:28 crc kubenswrapper[4799]: I0127 09:12:28.751474 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf7c8\" (UniqueName: \"kubernetes.io/projected/86dca6e4-4f85-46b6-a14c-97b78fe2b2cd-kube-api-access-vf7c8\") on node \"crc\" DevicePath \"\"" Jan 27 09:12:29 crc kubenswrapper[4799]: I0127 09:12:29.247130 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acc30c6678e1d2fcf3351136a1c1ad69ec5a240f46a7dff3f042ba275631c657" Jan 27 09:12:29 crc kubenswrapper[4799]: I0127 09:12:29.247195 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 09:12:30 crc kubenswrapper[4799]: I0127 09:12:30.467186 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86dca6e4-4f85-46b6-a14c-97b78fe2b2cd" path="/var/lib/kubelet/pods/86dca6e4-4f85-46b6-a14c-97b78fe2b2cd/volumes" Jan 27 09:12:39 crc kubenswrapper[4799]: I0127 09:12:39.994802 4799 scope.go:117] "RemoveContainer" containerID="af361d61e6fd7f84ab686353adff4fb91448fc0b9d8ebc26fa8079545a8e5189" Jan 27 09:12:53 crc kubenswrapper[4799]: I0127 09:12:53.732058 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:12:53 crc kubenswrapper[4799]: I0127 09:12:53.732762 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.972989 4799 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 09:13:02 crc kubenswrapper[4799]: E0127 09:13:02.973954 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86dca6e4-4f85-46b6-a14c-97b78fe2b2cd" containerName="mariadb-client" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.973972 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="86dca6e4-4f85-46b6-a14c-97b78fe2b2cd" containerName="mariadb-client" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.974161 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="86dca6e4-4f85-46b6-a14c-97b78fe2b2cd" containerName="mariadb-client" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.975119 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.978613 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-689g2" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.978615 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.980227 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.991617 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 27 09:13:02 crc kubenswrapper[4799]: I0127 09:13:02.994104 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.012490 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.014757 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.026906 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.062951 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.073858 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160023 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6c91f2-0dfa-4126-becf-cbd05d330a85-config\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160596 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dacd9f52-7f23-4655-aa11-dde805853c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dacd9f52-7f23-4655-aa11-dde805853c86\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160640 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtzv\" (UniqueName: \"kubernetes.io/projected/db6c91f2-0dfa-4126-becf-cbd05d330a85-kube-api-access-wrtzv\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160708 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-scripts\") pod \"ovsdbserver-nb-2\" (UID: 
\"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160745 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2p5\" (UniqueName: \"kubernetes.io/projected/52aa5460-8ac3-46cf-bd19-cb2384cb1740-kube-api-access-4m2p5\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160775 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52aa5460-8ac3-46cf-bd19-cb2384cb1740-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160818 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160849 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52aa5460-8ac3-46cf-bd19-cb2384cb1740-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160913 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6c91f2-0dfa-4126-becf-cbd05d330a85-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160960 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.160997 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxz7m\" (UniqueName: \"kubernetes.io/projected/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-kube-api-access-nxz7m\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.161028 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.161098 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db6c91f2-0dfa-4126-becf-cbd05d330a85-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.161128 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: 
\"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.161161 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/db6c91f2-0dfa-4126-becf-cbd05d330a85-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.161190 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa5460-8ac3-46cf-bd19-cb2384cb1740-config\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.161232 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52aa5460-8ac3-46cf-bd19-cb2384cb1740-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.161259 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-config\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.166109 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.167844 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.170734 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bz725" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.170956 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.172563 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.174685 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.176499 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.183519 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.185673 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.199265 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.212552 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.231410 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.262929 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68c601d3-ce33-4bf0-9b39-811233938733-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263011 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263038 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxz7m\" (UniqueName: \"kubernetes.io/projected/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-kube-api-access-nxz7m\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263065 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c601d3-ce33-4bf0-9b39-811233938733-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263090 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263132 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddht\" (UniqueName: \"kubernetes.io/projected/68c601d3-ce33-4bf0-9b39-811233938733-kube-api-access-xddht\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263158 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c601d3-ce33-4bf0-9b39-811233938733-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263187 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db6c91f2-0dfa-4126-becf-cbd05d330a85-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263216 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 
09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263238 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/db6c91f2-0dfa-4126-becf-cbd05d330a85-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263258 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa5460-8ac3-46cf-bd19-cb2384cb1740-config\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263283 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52aa5460-8ac3-46cf-bd19-cb2384cb1740-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263356 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-config\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263381 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6c91f2-0dfa-4126-becf-cbd05d330a85-config\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dacd9f52-7f23-4655-aa11-dde805853c86\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dacd9f52-7f23-4655-aa11-dde805853c86\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263430 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrtzv\" (UniqueName: \"kubernetes.io/projected/db6c91f2-0dfa-4126-becf-cbd05d330a85-kube-api-access-wrtzv\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263466 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263491 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m2p5\" (UniqueName: \"kubernetes.io/projected/52aa5460-8ac3-46cf-bd19-cb2384cb1740-kube-api-access-4m2p5\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263513 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52aa5460-8ac3-46cf-bd19-cb2384cb1740-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263543 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: 
\"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263563 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52aa5460-8ac3-46cf-bd19-cb2384cb1740-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263593 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68c601d3-ce33-4bf0-9b39-811233938733-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263621 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.263649 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6c91f2-0dfa-4126-becf-cbd05d330a85-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.265216 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52aa5460-8ac3-46cf-bd19-cb2384cb1740-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc 
kubenswrapper[4799]: I0127 09:13:03.265565 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-config\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.266124 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.266220 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52aa5460-8ac3-46cf-bd19-cb2384cb1740-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.266256 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6c91f2-0dfa-4126-becf-cbd05d330a85-config\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.266496 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/db6c91f2-0dfa-4126-becf-cbd05d330a85-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.266627 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: 
\"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.266640 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db6c91f2-0dfa-4126-becf-cbd05d330a85-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.267160 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa5460-8ac3-46cf-bd19-cb2384cb1740-config\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.269714 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db6c91f2-0dfa-4126-becf-cbd05d330a85-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.271293 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.271588 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.271592 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.271614 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f33623fc2ee42c9bf9ff2daa814de3432584a9984ce1742a63f0fd457ec028ea/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.271615 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dacd9f52-7f23-4655-aa11-dde805853c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dacd9f52-7f23-4655-aa11-dde805853c86\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e888adffcf982869b7818fb76bc36be00b767a3353f388b1c7675efc55f65d73/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.275931 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52aa5460-8ac3-46cf-bd19-cb2384cb1740-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.278338 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.278386 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f305d92f70f49c7761499469c837053a200de8b2893029fd3fb2755ab0642bfe/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.284397 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrtzv\" (UniqueName: \"kubernetes.io/projected/db6c91f2-0dfa-4126-becf-cbd05d330a85-kube-api-access-wrtzv\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.291448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m2p5\" (UniqueName: \"kubernetes.io/projected/52aa5460-8ac3-46cf-bd19-cb2384cb1740-kube-api-access-4m2p5\") pod \"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.302137 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxz7m\" (UniqueName: \"kubernetes.io/projected/7d2b3b2a-25ed-4404-a93f-ebae05f98ba3-kube-api-access-nxz7m\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.305413 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dacd9f52-7f23-4655-aa11-dde805853c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dacd9f52-7f23-4655-aa11-dde805853c86\") pod 
\"ovsdbserver-nb-1\" (UID: \"52aa5460-8ac3-46cf-bd19-cb2384cb1740\") " pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.306966 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2556a043-50a7-44ea-9369-db7fc8cd7b05\") pod \"ovsdbserver-nb-0\" (UID: \"db6c91f2-0dfa-4126-becf-cbd05d330a85\") " pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.320464 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd028bb3-cacd-406d-a6b5-25983ccbcbaa\") pod \"ovsdbserver-nb-2\" (UID: \"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3\") " pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.327716 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.348087 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364757 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0cbe319-884e-4ddf-b7e5-95711b219241-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364814 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0cbe319-884e-4ddf-b7e5-95711b219241-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364844 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364879 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68c601d3-ce33-4bf0-9b39-811233938733-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364911 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " 
pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364940 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68c601d3-ce33-4bf0-9b39-811233938733-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364972 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4461ea5-728b-42b7-a411-b160417adb11-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.364999 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c601d3-ce33-4bf0-9b39-811233938733-config\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365029 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4461ea5-728b-42b7-a411-b160417adb11-config\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365061 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xddht\" (UniqueName: \"kubernetes.io/projected/68c601d3-ce33-4bf0-9b39-811233938733-kube-api-access-xddht\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365085 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx9l6\" (UniqueName: \"kubernetes.io/projected/d0cbe319-884e-4ddf-b7e5-95711b219241-kube-api-access-cx9l6\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c601d3-ce33-4bf0-9b39-811233938733-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365132 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4461ea5-728b-42b7-a411-b160417adb11-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365158 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqq4p\" (UniqueName: \"kubernetes.io/projected/f4461ea5-728b-42b7-a411-b160417adb11-kube-api-access-fqq4p\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365184 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4461ea5-728b-42b7-a411-b160417adb11-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365213 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-17a93864-9485-4b98-a348-cf76cccb20bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17a93864-9485-4b98-a348-cf76cccb20bd\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365270 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0cbe319-884e-4ddf-b7e5-95711b219241-config\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.365295 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cbe319-884e-4ddf-b7e5-95711b219241-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.366156 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68c601d3-ce33-4bf0-9b39-811233938733-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.367073 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68c601d3-ce33-4bf0-9b39-811233938733-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.367633 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.367663 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c601d3-ce33-4bf0-9b39-811233938733-config\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.367672 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76dd60e9af6957c9d3a0c66ce34fe064548bb2d2abe0741bff49800712223138/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.374076 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c601d3-ce33-4bf0-9b39-811233938733-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.389621 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xddht\" (UniqueName: \"kubernetes.io/projected/68c601d3-ce33-4bf0-9b39-811233938733-kube-api-access-xddht\") pod \"ovsdbserver-sb-0\" (UID: \"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.407874 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdbbe023-7b7b-40b3-9af7-be2221a88fe5\") pod \"ovsdbserver-sb-0\" (UID: 
\"68c601d3-ce33-4bf0-9b39-811233938733\") " pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.466875 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4461ea5-728b-42b7-a411-b160417adb11-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467242 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4461ea5-728b-42b7-a411-b160417adb11-config\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467283 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx9l6\" (UniqueName: \"kubernetes.io/projected/d0cbe319-884e-4ddf-b7e5-95711b219241-kube-api-access-cx9l6\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467310 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4461ea5-728b-42b7-a411-b160417adb11-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467373 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqq4p\" (UniqueName: \"kubernetes.io/projected/f4461ea5-728b-42b7-a411-b160417adb11-kube-api-access-fqq4p\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467400 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4461ea5-728b-42b7-a411-b160417adb11-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467429 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-17a93864-9485-4b98-a348-cf76cccb20bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17a93864-9485-4b98-a348-cf76cccb20bd\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0cbe319-884e-4ddf-b7e5-95711b219241-config\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467527 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cbe319-884e-4ddf-b7e5-95711b219241-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467590 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0cbe319-884e-4ddf-b7e5-95711b219241-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467615 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0cbe319-884e-4ddf-b7e5-95711b219241-scripts\") 
pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.467638 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.469046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4461ea5-728b-42b7-a411-b160417adb11-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.469753 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4461ea5-728b-42b7-a411-b160417adb11-config\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.470706 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.470731 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-17a93864-9485-4b98-a348-cf76cccb20bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17a93864-9485-4b98-a348-cf76cccb20bd\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/327a726ba5f5d7e0990bab8b20a4653c32a30d50144e10235d4a7072698184b2/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.471553 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.471578 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ff549075de0bd1df8d97fcd10b144b798d494c67c8b800c46a7926c2b81c2531/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.472025 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0cbe319-884e-4ddf-b7e5-95711b219241-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.472307 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4461ea5-728b-42b7-a411-b160417adb11-scripts\") pod \"ovsdbserver-sb-1\" (UID: 
\"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.474304 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cbe319-884e-4ddf-b7e5-95711b219241-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.475799 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0cbe319-884e-4ddf-b7e5-95711b219241-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.476026 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0cbe319-884e-4ddf-b7e5-95711b219241-config\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.476920 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4461ea5-728b-42b7-a411-b160417adb11-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.487828 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqq4p\" (UniqueName: \"kubernetes.io/projected/f4461ea5-728b-42b7-a411-b160417adb11-kube-api-access-fqq4p\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.492224 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cx9l6\" (UniqueName: \"kubernetes.io/projected/d0cbe319-884e-4ddf-b7e5-95711b219241-kube-api-access-cx9l6\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.494270 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.499519 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-17a93864-9485-4b98-a348-cf76cccb20bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17a93864-9485-4b98-a348-cf76cccb20bd\") pod \"ovsdbserver-sb-2\" (UID: \"d0cbe319-884e-4ddf-b7e5-95711b219241\") " pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.505385 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aaa38ae9-2b49-4ae4-9497-9c2acacd18d1\") pod \"ovsdbserver-sb-1\" (UID: \"f4461ea5-728b-42b7-a411-b160417adb11\") " pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.513666 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.521805 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.604743 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.875045 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 27 09:13:03 crc kubenswrapper[4799]: I0127 09:13:03.964250 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 09:13:03 crc kubenswrapper[4799]: W0127 09:13:03.971985 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4461ea5_728b_42b7_a411_b160417adb11.slice/crio-41db62c66872531baa1662c23eb0b10aec94ef7439a08bdc72e08716fb3ae5ed WatchSource:0}: Error finding container 41db62c66872531baa1662c23eb0b10aec94ef7439a08bdc72e08716fb3ae5ed: Status 404 returned error can't find the container with id 41db62c66872531baa1662c23eb0b10aec94ef7439a08bdc72e08716fb3ae5ed Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.063123 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.214361 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.572393 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"68c601d3-ce33-4bf0-9b39-811233938733","Type":"ContainerStarted","Data":"6558a39c59d2f5c1301087bb1f2fdb8f2fb1429cac6121d0bb9a7b3ce8c009ae"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.572723 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"68c601d3-ce33-4bf0-9b39-811233938733","Type":"ContainerStarted","Data":"6eb16fe1ba07852fbf5be896057590e64a692909f49a1521c0393ec851a7df4b"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.572737 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"68c601d3-ce33-4bf0-9b39-811233938733","Type":"ContainerStarted","Data":"995487b86a4418ef7beddb4814a957ce7fa861d52f89c00e13bd7308b9f25b27"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.580827 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"52aa5460-8ac3-46cf-bd19-cb2384cb1740","Type":"ContainerStarted","Data":"889349e327505c5eccf88c9020dc19bd74db53d91916be99f374cf999bc793fc"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.580879 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"52aa5460-8ac3-46cf-bd19-cb2384cb1740","Type":"ContainerStarted","Data":"ce54fa42f603e2f9ecfb5b09d1d98e985214522e369fbd217889462f1944cdcf"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.580893 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"52aa5460-8ac3-46cf-bd19-cb2384cb1740","Type":"ContainerStarted","Data":"2ac41cf10862c5a6583e07050a4414b3f8ff26c0d51d69a0fdb78fd36dc055fb"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.586496 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"db6c91f2-0dfa-4126-becf-cbd05d330a85","Type":"ContainerStarted","Data":"4e23fff36022e1cb6a2050dbede6877d662546d483a54b2ea83d37c74fde0352"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.586529 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"db6c91f2-0dfa-4126-becf-cbd05d330a85","Type":"ContainerStarted","Data":"f157108516a0dd55fae32e5d69aef1854dde1bbff1d5ba94cf2819df9261a1d2"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.590135 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.590115349 podStartE2EDuration="2.590115349s" podCreationTimestamp="2026-01-27 09:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:04.588593448 +0000 UTC m=+5250.899697533" watchObservedRunningTime="2026-01-27 09:13:04.590115349 +0000 UTC m=+5250.901219414" Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.591172 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f4461ea5-728b-42b7-a411-b160417adb11","Type":"ContainerStarted","Data":"4ad586ff2661eb80fb2c1a03529e96a61c12c58f925f2bd133f0fdd9b930d954"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.591209 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f4461ea5-728b-42b7-a411-b160417adb11","Type":"ContainerStarted","Data":"079ea2bfe3eebada7b9c77e4be0f2159cdaa941c90ddc9ede74692460c358382"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.591219 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f4461ea5-728b-42b7-a411-b160417adb11","Type":"ContainerStarted","Data":"41db62c66872531baa1662c23eb0b10aec94ef7439a08bdc72e08716fb3ae5ed"} Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.613964 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.613947439 podStartE2EDuration="3.613947439s" podCreationTimestamp="2026-01-27 09:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:04.606540976 +0000 UTC m=+5250.917645041" watchObservedRunningTime="2026-01-27 09:13:04.613947439 +0000 UTC m=+5250.925051504" Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.630336 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=2.630296724 podStartE2EDuration="2.630296724s" podCreationTimestamp="2026-01-27 09:13:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:04.625681769 +0000 UTC m=+5250.936785834" watchObservedRunningTime="2026-01-27 09:13:04.630296724 +0000 UTC m=+5250.941400789" Jan 27 09:13:04 crc kubenswrapper[4799]: I0127 09:13:04.928013 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 27 09:13:04 crc kubenswrapper[4799]: W0127 09:13:04.928009 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d2b3b2a_25ed_4404_a93f_ebae05f98ba3.slice/crio-7fc1b72fd0267d8a930fb9afcec28699089eb64ff8578dd009b3baa67628c161 WatchSource:0}: Error finding container 7fc1b72fd0267d8a930fb9afcec28699089eb64ff8578dd009b3baa67628c161: Status 404 returned error can't find the container with id 7fc1b72fd0267d8a930fb9afcec28699089eb64ff8578dd009b3baa67628c161 Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.078926 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 09:13:05 crc kubenswrapper[4799]: W0127 09:13:05.088806 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cbe319_884e_4ddf_b7e5_95711b219241.slice/crio-9a026fc96635253704ba8001c5531bb32c4adab716d8ac2db1c1a390f60830d3 WatchSource:0}: Error finding container 9a026fc96635253704ba8001c5531bb32c4adab716d8ac2db1c1a390f60830d3: Status 404 returned error can't find the container with id 9a026fc96635253704ba8001c5531bb32c4adab716d8ac2db1c1a390f60830d3 Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.599565 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d0cbe319-884e-4ddf-b7e5-95711b219241","Type":"ContainerStarted","Data":"863f5c627a22bee66547c90ca570cc9b3fdf85ac4133234d8e2c5d6911a36782"} Jan 27 09:13:05 crc kubenswrapper[4799]: 
I0127 09:13:05.599826 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d0cbe319-884e-4ddf-b7e5-95711b219241","Type":"ContainerStarted","Data":"e4fa45150734845431f8ee32a99320d57fe57aa8b36b144448f98841e4a42810"} Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.599841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d0cbe319-884e-4ddf-b7e5-95711b219241","Type":"ContainerStarted","Data":"9a026fc96635253704ba8001c5531bb32c4adab716d8ac2db1c1a390f60830d3"} Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.601288 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3","Type":"ContainerStarted","Data":"1eb809b40d41872d8b25fcb06e8a96b14b2ff26775861a0d3ec8885ad5668b53"} Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.601358 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3","Type":"ContainerStarted","Data":"92572ca8af4b8a4a02a99a5d28d33cc7978f2e2860b1caf2a362786d54ec3cb6"} Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.601375 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"7d2b3b2a-25ed-4404-a93f-ebae05f98ba3","Type":"ContainerStarted","Data":"7fc1b72fd0267d8a930fb9afcec28699089eb64ff8578dd009b3baa67628c161"} Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.603124 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"db6c91f2-0dfa-4126-becf-cbd05d330a85","Type":"ContainerStarted","Data":"bd5188ae4b1188ddf5cb0cd7a32ef29c35121abef5643012f27d33a091f22d59"} Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.624415 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.624396221 
podStartE2EDuration="3.624396221s" podCreationTimestamp="2026-01-27 09:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:05.617068892 +0000 UTC m=+5251.928172957" watchObservedRunningTime="2026-01-27 09:13:05.624396221 +0000 UTC m=+5251.935500286" Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.635301 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.635279618 podStartE2EDuration="4.635279618s" podCreationTimestamp="2026-01-27 09:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:05.63240387 +0000 UTC m=+5251.943507935" watchObservedRunningTime="2026-01-27 09:13:05.635279618 +0000 UTC m=+5251.946383683" Jan 27 09:13:05 crc kubenswrapper[4799]: I0127 09:13:05.665186 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.665159563 podStartE2EDuration="4.665159563s" podCreationTimestamp="2026-01-27 09:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:05.658988735 +0000 UTC m=+5251.970092800" watchObservedRunningTime="2026-01-27 09:13:05.665159563 +0000 UTC m=+5251.976263648" Jan 27 09:13:06 crc kubenswrapper[4799]: I0127 09:13:06.328875 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:06 crc kubenswrapper[4799]: I0127 09:13:06.349190 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:06 crc kubenswrapper[4799]: I0127 09:13:06.495527 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 
09:13:06 crc kubenswrapper[4799]: I0127 09:13:06.514449 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:06 crc kubenswrapper[4799]: I0127 09:13:06.522884 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:06 crc kubenswrapper[4799]: I0127 09:13:06.605073 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:08 crc kubenswrapper[4799]: I0127 09:13:08.327973 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:08 crc kubenswrapper[4799]: I0127 09:13:08.349435 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:08 crc kubenswrapper[4799]: I0127 09:13:08.495260 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:08 crc kubenswrapper[4799]: I0127 09:13:08.513864 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:08 crc kubenswrapper[4799]: I0127 09:13:08.522825 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:08 crc kubenswrapper[4799]: I0127 09:13:08.605542 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.390773 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.414353 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.478819 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovsdbserver-nb-1" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.565661 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.574976 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.597154 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.617118 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.622202 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.664994 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.717994 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.745504 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c574c44d9-87zkn"] Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.746858 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.748474 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.775418 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c574c44d9-87zkn"] Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.888216 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx9xf\" (UniqueName: \"kubernetes.io/projected/321118fb-4d41-4f4d-ab86-e99e00410590-kube-api-access-rx9xf\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.888561 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-dns-svc\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.888621 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-ovsdbserver-nb\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.888809 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-config\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " 
pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.943728 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c574c44d9-87zkn"] Jan 27 09:13:09 crc kubenswrapper[4799]: E0127 09:13:09.945042 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-rx9xf ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-c574c44d9-87zkn" podUID="321118fb-4d41-4f4d-ab86-e99e00410590" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.964986 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864ff95797-vv7kb"] Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.966568 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.969842 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.990630 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-dns-svc\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.990671 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-ovsdbserver-nb\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.990700 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-config\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.990782 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx9xf\" (UniqueName: \"kubernetes.io/projected/321118fb-4d41-4f4d-ab86-e99e00410590-kube-api-access-rx9xf\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.991806 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-dns-svc\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.992353 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-ovsdbserver-nb\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.992880 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-config\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:09 crc kubenswrapper[4799]: I0127 09:13:09.993825 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864ff95797-vv7kb"] Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.017650 4799 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-rx9xf\" (UniqueName: \"kubernetes.io/projected/321118fb-4d41-4f4d-ab86-e99e00410590-kube-api-access-rx9xf\") pod \"dnsmasq-dns-c574c44d9-87zkn\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.092702 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8wf\" (UniqueName: \"kubernetes.io/projected/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-kube-api-access-sr8wf\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.092797 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-config\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.092838 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-dns-svc\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.092860 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-sb\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.092881 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-nb\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.193777 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-config\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.193846 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-dns-svc\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.193872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-sb\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.193894 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-nb\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.193936 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sr8wf\" (UniqueName: \"kubernetes.io/projected/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-kube-api-access-sr8wf\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.194689 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-config\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.194766 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-sb\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.195111 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-nb\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.195124 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-dns-svc\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.212262 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8wf\" (UniqueName: 
\"kubernetes.io/projected/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-kube-api-access-sr8wf\") pod \"dnsmasq-dns-864ff95797-vv7kb\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.284333 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.647526 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.658791 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.744816 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864ff95797-vv7kb"] Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.804507 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-ovsdbserver-nb\") pod \"321118fb-4d41-4f4d-ab86-e99e00410590\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.804972 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-config\") pod \"321118fb-4d41-4f4d-ab86-e99e00410590\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.805015 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx9xf\" (UniqueName: \"kubernetes.io/projected/321118fb-4d41-4f4d-ab86-e99e00410590-kube-api-access-rx9xf\") pod \"321118fb-4d41-4f4d-ab86-e99e00410590\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " Jan 
27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.805063 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "321118fb-4d41-4f4d-ab86-e99e00410590" (UID: "321118fb-4d41-4f4d-ab86-e99e00410590"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.805442 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-config" (OuterVolumeSpecName: "config") pod "321118fb-4d41-4f4d-ab86-e99e00410590" (UID: "321118fb-4d41-4f4d-ab86-e99e00410590"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.805501 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-dns-svc\") pod \"321118fb-4d41-4f4d-ab86-e99e00410590\" (UID: \"321118fb-4d41-4f4d-ab86-e99e00410590\") " Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.805818 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "321118fb-4d41-4f4d-ab86-e99e00410590" (UID: "321118fb-4d41-4f4d-ab86-e99e00410590"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.806594 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.806618 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.806628 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/321118fb-4d41-4f4d-ab86-e99e00410590-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.809207 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321118fb-4d41-4f4d-ab86-e99e00410590-kube-api-access-rx9xf" (OuterVolumeSpecName: "kube-api-access-rx9xf") pod "321118fb-4d41-4f4d-ab86-e99e00410590" (UID: "321118fb-4d41-4f4d-ab86-e99e00410590"). InnerVolumeSpecName "kube-api-access-rx9xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:10 crc kubenswrapper[4799]: I0127 09:13:10.908517 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx9xf\" (UniqueName: \"kubernetes.io/projected/321118fb-4d41-4f4d-ab86-e99e00410590-kube-api-access-rx9xf\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:11 crc kubenswrapper[4799]: I0127 09:13:11.656788 4799 generic.go:334] "Generic (PLEG): container finished" podID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerID="13dde022673314111a18adffded98f932aa6045543b694ec01b7af8985d1357d" exitCode=0 Jan 27 09:13:11 crc kubenswrapper[4799]: I0127 09:13:11.656860 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c574c44d9-87zkn" Jan 27 09:13:11 crc kubenswrapper[4799]: I0127 09:13:11.657204 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" event={"ID":"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd","Type":"ContainerDied","Data":"13dde022673314111a18adffded98f932aa6045543b694ec01b7af8985d1357d"} Jan 27 09:13:11 crc kubenswrapper[4799]: I0127 09:13:11.657282 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" event={"ID":"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd","Type":"ContainerStarted","Data":"2258c71354fbde2f3eea4924b4956fba360541bebf0bc91e18912ec465dee6f4"} Jan 27 09:13:11 crc kubenswrapper[4799]: I0127 09:13:11.861785 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c574c44d9-87zkn"] Jan 27 09:13:11 crc kubenswrapper[4799]: I0127 09:13:11.866328 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c574c44d9-87zkn"] Jan 27 09:13:12 crc kubenswrapper[4799]: I0127 09:13:12.474446 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321118fb-4d41-4f4d-ab86-e99e00410590" path="/var/lib/kubelet/pods/321118fb-4d41-4f4d-ab86-e99e00410590/volumes" Jan 27 09:13:12 crc kubenswrapper[4799]: I0127 09:13:12.670928 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" event={"ID":"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd","Type":"ContainerStarted","Data":"01fae0e63dc6f3ecdda121c1bff940bd2cc728ab4bad771f9eb733cf62a2f098"} Jan 27 09:13:12 crc kubenswrapper[4799]: I0127 09:13:12.672044 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:12 crc kubenswrapper[4799]: I0127 09:13:12.710027 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" podStartSLOduration=3.710002731 
podStartE2EDuration="3.710002731s" podCreationTimestamp="2026-01-27 09:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:12.699491725 +0000 UTC m=+5259.010595840" watchObservedRunningTime="2026-01-27 09:13:12.710002731 +0000 UTC m=+5259.021106816" Jan 27 09:13:13 crc kubenswrapper[4799]: I0127 09:13:13.409911 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 27 09:13:13 crc kubenswrapper[4799]: I0127 09:13:13.572092 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.234521 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.236124 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.238420 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.243542 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.311200 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3af41104-613d-4430-8b6c-8daea9e443d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3af41104-613d-4430-8b6c-8daea9e443d5\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.311396 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqdrd\" (UniqueName: 
\"kubernetes.io/projected/4c8362f6-7376-4b8b-a8ac-6bc38be236a8-kube-api-access-jqdrd\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.311476 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/4c8362f6-7376-4b8b-a8ac-6bc38be236a8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.413283 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/4c8362f6-7376-4b8b-a8ac-6bc38be236a8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.413427 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3af41104-613d-4430-8b6c-8daea9e443d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3af41104-613d-4430-8b6c-8daea9e443d5\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.413503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqdrd\" (UniqueName: \"kubernetes.io/projected/4c8362f6-7376-4b8b-a8ac-6bc38be236a8-kube-api-access-jqdrd\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.416328 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.416363 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3af41104-613d-4430-8b6c-8daea9e443d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3af41104-613d-4430-8b6c-8daea9e443d5\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/12e8df50fb8936542855acab4647191a0ff4a6407dc113c4823245e039d30ea4/globalmount\"" pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.419899 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/4c8362f6-7376-4b8b-a8ac-6bc38be236a8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.441551 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqdrd\" (UniqueName: \"kubernetes.io/projected/4c8362f6-7376-4b8b-a8ac-6bc38be236a8-kube-api-access-jqdrd\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.451049 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3af41104-613d-4430-8b6c-8daea9e443d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3af41104-613d-4430-8b6c-8daea9e443d5\") pod \"ovn-copy-data\" (UID: \"4c8362f6-7376-4b8b-a8ac-6bc38be236a8\") " pod="openstack/ovn-copy-data" Jan 27 09:13:16 crc kubenswrapper[4799]: I0127 09:13:16.560972 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 27 09:13:17 crc kubenswrapper[4799]: I0127 09:13:17.064984 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 27 09:13:17 crc kubenswrapper[4799]: I0127 09:13:17.727746 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"4c8362f6-7376-4b8b-a8ac-6bc38be236a8","Type":"ContainerStarted","Data":"bb150d4a63adcec867fe344b6df4e325f570cb88cc1c66cd3d49eb5a199b79f6"} Jan 27 09:13:17 crc kubenswrapper[4799]: I0127 09:13:17.728441 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"4c8362f6-7376-4b8b-a8ac-6bc38be236a8","Type":"ContainerStarted","Data":"938f24631a8274af9a779a62a8085f96f35484beb3e3e1aa98f018a016c5f9eb"} Jan 27 09:13:17 crc kubenswrapper[4799]: I0127 09:13:17.759245 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.7592274740000002 podStartE2EDuration="2.759227474s" podCreationTimestamp="2026-01-27 09:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:17.757412964 +0000 UTC m=+5264.068517069" watchObservedRunningTime="2026-01-27 09:13:17.759227474 +0000 UTC m=+5264.070331539" Jan 27 09:13:20 crc kubenswrapper[4799]: I0127 09:13:20.288217 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:20 crc kubenswrapper[4799]: I0127 09:13:20.392347 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vzt79"] Jan 27 09:13:20 crc kubenswrapper[4799]: I0127 09:13:20.393206 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" podUID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerName="dnsmasq-dns" 
containerID="cri-o://1aab06edda484e4f8531ceeb4f02bf7bb586140b1d70c3d22ec32fd7f0807478" gracePeriod=10 Jan 27 09:13:20 crc kubenswrapper[4799]: I0127 09:13:20.756380 4799 generic.go:334] "Generic (PLEG): container finished" podID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerID="1aab06edda484e4f8531ceeb4f02bf7bb586140b1d70c3d22ec32fd7f0807478" exitCode=0 Jan 27 09:13:20 crc kubenswrapper[4799]: I0127 09:13:20.756432 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" event={"ID":"1fdce98d-d75c-4d02-98b5-6f19b41b0228","Type":"ContainerDied","Data":"1aab06edda484e4f8531ceeb4f02bf7bb586140b1d70c3d22ec32fd7f0807478"} Jan 27 09:13:20 crc kubenswrapper[4799]: I0127 09:13:20.881770 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.017557 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-config\") pod \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.017607 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl54c\" (UniqueName: \"kubernetes.io/projected/1fdce98d-d75c-4d02-98b5-6f19b41b0228-kube-api-access-gl54c\") pod \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.017684 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-dns-svc\") pod \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\" (UID: \"1fdce98d-d75c-4d02-98b5-6f19b41b0228\") " Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.028645 4799 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fdce98d-d75c-4d02-98b5-6f19b41b0228-kube-api-access-gl54c" (OuterVolumeSpecName: "kube-api-access-gl54c") pod "1fdce98d-d75c-4d02-98b5-6f19b41b0228" (UID: "1fdce98d-d75c-4d02-98b5-6f19b41b0228"). InnerVolumeSpecName "kube-api-access-gl54c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.053576 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-config" (OuterVolumeSpecName: "config") pod "1fdce98d-d75c-4d02-98b5-6f19b41b0228" (UID: "1fdce98d-d75c-4d02-98b5-6f19b41b0228"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.058921 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fdce98d-d75c-4d02-98b5-6f19b41b0228" (UID: "1fdce98d-d75c-4d02-98b5-6f19b41b0228"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.121278 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.121339 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fdce98d-d75c-4d02-98b5-6f19b41b0228-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.121352 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl54c\" (UniqueName: \"kubernetes.io/projected/1fdce98d-d75c-4d02-98b5-6f19b41b0228-kube-api-access-gl54c\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.768037 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" event={"ID":"1fdce98d-d75c-4d02-98b5-6f19b41b0228","Type":"ContainerDied","Data":"da394a6027560258e50724108bef7a29568e29166c39394e5b43b5a025213d05"} Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.768105 4799 scope.go:117] "RemoveContainer" containerID="1aab06edda484e4f8531ceeb4f02bf7bb586140b1d70c3d22ec32fd7f0807478" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.768337 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vzt79" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.787376 4799 scope.go:117] "RemoveContainer" containerID="f77a3427b4014f4a80ed7dae4db9e0f64e101f94ff069046403d66c1cb0d222e" Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.817423 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vzt79"] Jan 27 09:13:21 crc kubenswrapper[4799]: I0127 09:13:21.826294 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vzt79"] Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.475224 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" path="/var/lib/kubelet/pods/1fdce98d-d75c-4d02-98b5-6f19b41b0228/volumes" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.980570 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 09:13:22 crc kubenswrapper[4799]: E0127 09:13:22.980954 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerName="dnsmasq-dns" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.980972 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerName="dnsmasq-dns" Jan 27 09:13:22 crc kubenswrapper[4799]: E0127 09:13:22.980995 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerName="init" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.981004 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerName="init" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.988717 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fdce98d-d75c-4d02-98b5-6f19b41b0228" containerName="dnsmasq-dns" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.989720 
4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.995013 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vhhx9" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.995191 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 09:13:22 crc kubenswrapper[4799]: I0127 09:13:22.995626 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.000628 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.063121 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7451de4-4685-4848-9df5-27eb6334da4e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.063209 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm2v5\" (UniqueName: \"kubernetes.io/projected/d7451de4-4685-4848-9df5-27eb6334da4e-kube-api-access-sm2v5\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.063250 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7451de4-4685-4848-9df5-27eb6334da4e-config\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.063308 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7451de4-4685-4848-9df5-27eb6334da4e-scripts\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.063326 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7451de4-4685-4848-9df5-27eb6334da4e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.164742 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7451de4-4685-4848-9df5-27eb6334da4e-config\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.164816 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7451de4-4685-4848-9df5-27eb6334da4e-scripts\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.164839 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7451de4-4685-4848-9df5-27eb6334da4e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.164878 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7451de4-4685-4848-9df5-27eb6334da4e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") 
" pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.164929 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm2v5\" (UniqueName: \"kubernetes.io/projected/d7451de4-4685-4848-9df5-27eb6334da4e-kube-api-access-sm2v5\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.167346 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7451de4-4685-4848-9df5-27eb6334da4e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.167915 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7451de4-4685-4848-9df5-27eb6334da4e-config\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.168326 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7451de4-4685-4848-9df5-27eb6334da4e-scripts\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.172434 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7451de4-4685-4848-9df5-27eb6334da4e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.203923 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm2v5\" (UniqueName: 
\"kubernetes.io/projected/d7451de4-4685-4848-9df5-27eb6334da4e-kube-api-access-sm2v5\") pod \"ovn-northd-0\" (UID: \"d7451de4-4685-4848-9df5-27eb6334da4e\") " pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.315649 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.731525 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.731927 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.764681 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 09:13:23 crc kubenswrapper[4799]: I0127 09:13:23.797207 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7451de4-4685-4848-9df5-27eb6334da4e","Type":"ContainerStarted","Data":"fcb65054889c4d1b059d5733abed4ee9ca6c4d2d25df474297a961b319778bfb"} Jan 27 09:13:24 crc kubenswrapper[4799]: I0127 09:13:24.805007 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7451de4-4685-4848-9df5-27eb6334da4e","Type":"ContainerStarted","Data":"8919bd5efd82f383e2423357883710a7fe17cc1d09d43917c7a8a58f21508968"} Jan 27 09:13:24 crc kubenswrapper[4799]: I0127 09:13:24.805451 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-northd-0" Jan 27 09:13:24 crc kubenswrapper[4799]: I0127 09:13:24.805470 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7451de4-4685-4848-9df5-27eb6334da4e","Type":"ContainerStarted","Data":"d98770ebdac0521c062b4b5bbde380f5efbf1185e9360049555afdcbcd282f15"} Jan 27 09:13:24 crc kubenswrapper[4799]: I0127 09:13:24.829533 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.829510447 podStartE2EDuration="2.829510447s" podCreationTimestamp="2026-01-27 09:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:24.823986206 +0000 UTC m=+5271.135090331" watchObservedRunningTime="2026-01-27 09:13:24.829510447 +0000 UTC m=+5271.140614542" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.040312 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-98vtf"] Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.041900 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.048795 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3c45-account-create-update-8259j"] Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.050039 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.051598 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.058651 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-98vtf"] Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.067506 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3c45-account-create-update-8259j"] Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.147413 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxxq\" (UniqueName: \"kubernetes.io/projected/747711a8-56e5-4b29-a3da-4d2b739a1cc4-kube-api-access-dtxxq\") pod \"keystone-3c45-account-create-update-8259j\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.147556 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57pw\" (UniqueName: \"kubernetes.io/projected/4410868e-475c-4bab-a660-e877aadabc59-kube-api-access-t57pw\") pod \"keystone-db-create-98vtf\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.147586 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4410868e-475c-4bab-a660-e877aadabc59-operator-scripts\") pod \"keystone-db-create-98vtf\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.147617 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747711a8-56e5-4b29-a3da-4d2b739a1cc4-operator-scripts\") pod \"keystone-3c45-account-create-update-8259j\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.249057 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxxq\" (UniqueName: \"kubernetes.io/projected/747711a8-56e5-4b29-a3da-4d2b739a1cc4-kube-api-access-dtxxq\") pod \"keystone-3c45-account-create-update-8259j\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.249239 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t57pw\" (UniqueName: \"kubernetes.io/projected/4410868e-475c-4bab-a660-e877aadabc59-kube-api-access-t57pw\") pod \"keystone-db-create-98vtf\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.249278 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4410868e-475c-4bab-a660-e877aadabc59-operator-scripts\") pod \"keystone-db-create-98vtf\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.249550 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747711a8-56e5-4b29-a3da-4d2b739a1cc4-operator-scripts\") pod \"keystone-3c45-account-create-update-8259j\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.250290 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747711a8-56e5-4b29-a3da-4d2b739a1cc4-operator-scripts\") pod \"keystone-3c45-account-create-update-8259j\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.250329 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4410868e-475c-4bab-a660-e877aadabc59-operator-scripts\") pod \"keystone-db-create-98vtf\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.266719 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t57pw\" (UniqueName: \"kubernetes.io/projected/4410868e-475c-4bab-a660-e877aadabc59-kube-api-access-t57pw\") pod \"keystone-db-create-98vtf\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.268087 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxxq\" (UniqueName: \"kubernetes.io/projected/747711a8-56e5-4b29-a3da-4d2b739a1cc4-kube-api-access-dtxxq\") pod \"keystone-3c45-account-create-update-8259j\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.371788 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.385907 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.814453 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-98vtf"] Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.838996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-98vtf" event={"ID":"4410868e-475c-4bab-a660-e877aadabc59","Type":"ContainerStarted","Data":"d6d33c0b5e67a6c1714b740189cda9a20985039cf1091df83e2f1e891acea73c"} Jan 27 09:13:28 crc kubenswrapper[4799]: I0127 09:13:28.856400 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3c45-account-create-update-8259j"] Jan 27 09:13:28 crc kubenswrapper[4799]: W0127 09:13:28.864993 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod747711a8_56e5_4b29_a3da_4d2b739a1cc4.slice/crio-cb193ca41a9d36e95efb88323077e1866273c7c7f359aaec9f3476afd963d323 WatchSource:0}: Error finding container cb193ca41a9d36e95efb88323077e1866273c7c7f359aaec9f3476afd963d323: Status 404 returned error can't find the container with id cb193ca41a9d36e95efb88323077e1866273c7c7f359aaec9f3476afd963d323 Jan 27 09:13:29 crc kubenswrapper[4799]: I0127 09:13:29.851250 4799 generic.go:334] "Generic (PLEG): container finished" podID="4410868e-475c-4bab-a660-e877aadabc59" containerID="422342557a27d51130aae03a2d6675c73207f4e42f21e6ee4840afe2461ff00f" exitCode=0 Jan 27 09:13:29 crc kubenswrapper[4799]: I0127 09:13:29.851321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-98vtf" event={"ID":"4410868e-475c-4bab-a660-e877aadabc59","Type":"ContainerDied","Data":"422342557a27d51130aae03a2d6675c73207f4e42f21e6ee4840afe2461ff00f"} Jan 27 09:13:29 crc kubenswrapper[4799]: I0127 09:13:29.853872 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="747711a8-56e5-4b29-a3da-4d2b739a1cc4" containerID="a1ad14bfc8a25fad73859598d96c48c5aec3c3853ad97922ebe166ff62426284" exitCode=0 Jan 27 09:13:29 crc kubenswrapper[4799]: I0127 09:13:29.853918 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3c45-account-create-update-8259j" event={"ID":"747711a8-56e5-4b29-a3da-4d2b739a1cc4","Type":"ContainerDied","Data":"a1ad14bfc8a25fad73859598d96c48c5aec3c3853ad97922ebe166ff62426284"} Jan 27 09:13:29 crc kubenswrapper[4799]: I0127 09:13:29.853945 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3c45-account-create-update-8259j" event={"ID":"747711a8-56e5-4b29-a3da-4d2b739a1cc4","Type":"ContainerStarted","Data":"cb193ca41a9d36e95efb88323077e1866273c7c7f359aaec9f3476afd963d323"} Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.289115 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.297921 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.404553 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4410868e-475c-4bab-a660-e877aadabc59-operator-scripts\") pod \"4410868e-475c-4bab-a660-e877aadabc59\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.404857 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t57pw\" (UniqueName: \"kubernetes.io/projected/4410868e-475c-4bab-a660-e877aadabc59-kube-api-access-t57pw\") pod \"4410868e-475c-4bab-a660-e877aadabc59\" (UID: \"4410868e-475c-4bab-a660-e877aadabc59\") " Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.404937 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtxxq\" (UniqueName: \"kubernetes.io/projected/747711a8-56e5-4b29-a3da-4d2b739a1cc4-kube-api-access-dtxxq\") pod \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.404979 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747711a8-56e5-4b29-a3da-4d2b739a1cc4-operator-scripts\") pod \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\" (UID: \"747711a8-56e5-4b29-a3da-4d2b739a1cc4\") " Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.405678 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4410868e-475c-4bab-a660-e877aadabc59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4410868e-475c-4bab-a660-e877aadabc59" (UID: "4410868e-475c-4bab-a660-e877aadabc59"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.406156 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747711a8-56e5-4b29-a3da-4d2b739a1cc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "747711a8-56e5-4b29-a3da-4d2b739a1cc4" (UID: "747711a8-56e5-4b29-a3da-4d2b739a1cc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.411768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747711a8-56e5-4b29-a3da-4d2b739a1cc4-kube-api-access-dtxxq" (OuterVolumeSpecName: "kube-api-access-dtxxq") pod "747711a8-56e5-4b29-a3da-4d2b739a1cc4" (UID: "747711a8-56e5-4b29-a3da-4d2b739a1cc4"). InnerVolumeSpecName "kube-api-access-dtxxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.413511 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4410868e-475c-4bab-a660-e877aadabc59-kube-api-access-t57pw" (OuterVolumeSpecName: "kube-api-access-t57pw") pod "4410868e-475c-4bab-a660-e877aadabc59" (UID: "4410868e-475c-4bab-a660-e877aadabc59"). InnerVolumeSpecName "kube-api-access-t57pw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.507342 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t57pw\" (UniqueName: \"kubernetes.io/projected/4410868e-475c-4bab-a660-e877aadabc59-kube-api-access-t57pw\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.507380 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtxxq\" (UniqueName: \"kubernetes.io/projected/747711a8-56e5-4b29-a3da-4d2b739a1cc4-kube-api-access-dtxxq\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.507395 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747711a8-56e5-4b29-a3da-4d2b739a1cc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.507407 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4410868e-475c-4bab-a660-e877aadabc59-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.868747 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3c45-account-create-update-8259j" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.868700 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3c45-account-create-update-8259j" event={"ID":"747711a8-56e5-4b29-a3da-4d2b739a1cc4","Type":"ContainerDied","Data":"cb193ca41a9d36e95efb88323077e1866273c7c7f359aaec9f3476afd963d323"} Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.868912 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb193ca41a9d36e95efb88323077e1866273c7c7f359aaec9f3476afd963d323" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.870074 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-98vtf" event={"ID":"4410868e-475c-4bab-a660-e877aadabc59","Type":"ContainerDied","Data":"d6d33c0b5e67a6c1714b740189cda9a20985039cf1091df83e2f1e891acea73c"} Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.870117 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d33c0b5e67a6c1714b740189cda9a20985039cf1091df83e2f1e891acea73c" Jan 27 09:13:31 crc kubenswrapper[4799]: I0127 09:13:31.870121 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-98vtf" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.414923 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.548233 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x6dgx"] Jan 27 09:13:33 crc kubenswrapper[4799]: E0127 09:13:33.548662 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4410868e-475c-4bab-a660-e877aadabc59" containerName="mariadb-database-create" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.548688 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4410868e-475c-4bab-a660-e877aadabc59" containerName="mariadb-database-create" Jan 27 09:13:33 crc kubenswrapper[4799]: E0127 09:13:33.548706 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747711a8-56e5-4b29-a3da-4d2b739a1cc4" containerName="mariadb-account-create-update" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.548714 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="747711a8-56e5-4b29-a3da-4d2b739a1cc4" containerName="mariadb-account-create-update" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.548892 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="747711a8-56e5-4b29-a3da-4d2b739a1cc4" containerName="mariadb-account-create-update" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.548925 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4410868e-475c-4bab-a660-e877aadabc59" containerName="mariadb-database-create" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.549469 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.554923 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.555353 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.555511 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.556018 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9tlk6" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.559106 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x6dgx"] Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.651246 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-config-data\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.651361 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-combined-ca-bundle\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.651400 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xww2\" (UniqueName: \"kubernetes.io/projected/cf26015c-9724-4189-8b76-39774eb4400d-kube-api-access-8xww2\") pod \"keystone-db-sync-x6dgx\" (UID: 
\"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.753080 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-combined-ca-bundle\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.753152 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xww2\" (UniqueName: \"kubernetes.io/projected/cf26015c-9724-4189-8b76-39774eb4400d-kube-api-access-8xww2\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.753262 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-config-data\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.762371 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-combined-ca-bundle\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.765344 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-config-data\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: 
I0127 09:13:33.775202 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xww2\" (UniqueName: \"kubernetes.io/projected/cf26015c-9724-4189-8b76-39774eb4400d-kube-api-access-8xww2\") pod \"keystone-db-sync-x6dgx\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:33 crc kubenswrapper[4799]: I0127 09:13:33.872783 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:34 crc kubenswrapper[4799]: I0127 09:13:34.352106 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x6dgx"] Jan 27 09:13:34 crc kubenswrapper[4799]: W0127 09:13:34.368729 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf26015c_9724_4189_8b76_39774eb4400d.slice/crio-f40ba1c8a920e52cad27d03cf2ac627e7a7b90ad9f051c0eb6a6cb194d0c1163 WatchSource:0}: Error finding container f40ba1c8a920e52cad27d03cf2ac627e7a7b90ad9f051c0eb6a6cb194d0c1163: Status 404 returned error can't find the container with id f40ba1c8a920e52cad27d03cf2ac627e7a7b90ad9f051c0eb6a6cb194d0c1163 Jan 27 09:13:34 crc kubenswrapper[4799]: I0127 09:13:34.897914 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dgx" event={"ID":"cf26015c-9724-4189-8b76-39774eb4400d","Type":"ContainerStarted","Data":"405436464a5d3be1fc0624f84fa662f2c0b97c14239f44b26c2e00e8ad3c1d8c"} Jan 27 09:13:34 crc kubenswrapper[4799]: I0127 09:13:34.898329 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dgx" event={"ID":"cf26015c-9724-4189-8b76-39774eb4400d","Type":"ContainerStarted","Data":"f40ba1c8a920e52cad27d03cf2ac627e7a7b90ad9f051c0eb6a6cb194d0c1163"} Jan 27 09:13:34 crc kubenswrapper[4799]: I0127 09:13:34.941039 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-x6dgx" podStartSLOduration=1.941020599 podStartE2EDuration="1.941020599s" podCreationTimestamp="2026-01-27 09:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:34.929810023 +0000 UTC m=+5281.240914158" watchObservedRunningTime="2026-01-27 09:13:34.941020599 +0000 UTC m=+5281.252124674" Jan 27 09:13:36 crc kubenswrapper[4799]: I0127 09:13:36.912488 4799 generic.go:334] "Generic (PLEG): container finished" podID="cf26015c-9724-4189-8b76-39774eb4400d" containerID="405436464a5d3be1fc0624f84fa662f2c0b97c14239f44b26c2e00e8ad3c1d8c" exitCode=0 Jan 27 09:13:36 crc kubenswrapper[4799]: I0127 09:13:36.912574 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dgx" event={"ID":"cf26015c-9724-4189-8b76-39774eb4400d","Type":"ContainerDied","Data":"405436464a5d3be1fc0624f84fa662f2c0b97c14239f44b26c2e00e8ad3c1d8c"} Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.199640 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.339440 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-config-data\") pod \"cf26015c-9724-4189-8b76-39774eb4400d\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.339507 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-combined-ca-bundle\") pod \"cf26015c-9724-4189-8b76-39774eb4400d\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.339543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xww2\" (UniqueName: \"kubernetes.io/projected/cf26015c-9724-4189-8b76-39774eb4400d-kube-api-access-8xww2\") pod \"cf26015c-9724-4189-8b76-39774eb4400d\" (UID: \"cf26015c-9724-4189-8b76-39774eb4400d\") " Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.344898 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf26015c-9724-4189-8b76-39774eb4400d-kube-api-access-8xww2" (OuterVolumeSpecName: "kube-api-access-8xww2") pod "cf26015c-9724-4189-8b76-39774eb4400d" (UID: "cf26015c-9724-4189-8b76-39774eb4400d"). InnerVolumeSpecName "kube-api-access-8xww2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.366723 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf26015c-9724-4189-8b76-39774eb4400d" (UID: "cf26015c-9724-4189-8b76-39774eb4400d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.387403 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-config-data" (OuterVolumeSpecName: "config-data") pod "cf26015c-9724-4189-8b76-39774eb4400d" (UID: "cf26015c-9724-4189-8b76-39774eb4400d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.441396 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.441426 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf26015c-9724-4189-8b76-39774eb4400d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.441438 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xww2\" (UniqueName: \"kubernetes.io/projected/cf26015c-9724-4189-8b76-39774eb4400d-kube-api-access-8xww2\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.931458 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dgx" event={"ID":"cf26015c-9724-4189-8b76-39774eb4400d","Type":"ContainerDied","Data":"f40ba1c8a920e52cad27d03cf2ac627e7a7b90ad9f051c0eb6a6cb194d0c1163"} Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.931867 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f40ba1c8a920e52cad27d03cf2ac627e7a7b90ad9f051c0eb6a6cb194d0c1163" Jan 27 09:13:38 crc kubenswrapper[4799]: I0127 09:13:38.931506 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x6dgx" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.174858 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6859fc6d9c-qbc77"] Jan 27 09:13:39 crc kubenswrapper[4799]: E0127 09:13:39.175259 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf26015c-9724-4189-8b76-39774eb4400d" containerName="keystone-db-sync" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.175285 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf26015c-9724-4189-8b76-39774eb4400d" containerName="keystone-db-sync" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.176124 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf26015c-9724-4189-8b76-39774eb4400d" containerName="keystone-db-sync" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.179215 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.188648 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6859fc6d9c-qbc77"] Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.233479 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hzm8t"] Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.237366 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.242288 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.242561 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.242673 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.242788 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9tlk6" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.244845 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.254719 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-config\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.254774 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-sb\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.254817 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-nb\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: 
\"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.254855 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flrwg\" (UniqueName: \"kubernetes.io/projected/796fb974-de10-4836-9541-96983ed8f913-kube-api-access-flrwg\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.254964 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-dns-svc\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.260011 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hzm8t"] Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.356210 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-dns-svc\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.356346 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-config-data\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.356401 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-scripts\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.356437 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-config\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.356495 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-sb\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.357513 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-dns-svc\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.357604 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-sb\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.357639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-config\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" 
(UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.357686 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-combined-ca-bundle\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.357735 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-fernet-keys\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.357838 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-nb\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.358507 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-nb\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.358579 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flrwg\" (UniqueName: \"kubernetes.io/projected/796fb974-de10-4836-9541-96983ed8f913-kube-api-access-flrwg\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: 
\"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.358656 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-credential-keys\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.359035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nb95\" (UniqueName: \"kubernetes.io/projected/4f507b8a-c303-45a0-88c6-ebef1addcef2-kube-api-access-5nb95\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.385447 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flrwg\" (UniqueName: \"kubernetes.io/projected/796fb974-de10-4836-9541-96983ed8f913-kube-api-access-flrwg\") pod \"dnsmasq-dns-6859fc6d9c-qbc77\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.460333 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-credential-keys\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.460722 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nb95\" (UniqueName: \"kubernetes.io/projected/4f507b8a-c303-45a0-88c6-ebef1addcef2-kube-api-access-5nb95\") pod \"keystone-bootstrap-hzm8t\" (UID: 
\"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.460821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-config-data\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.460856 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-scripts\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.460896 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-combined-ca-bundle\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.460924 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-fernet-keys\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.464036 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-scripts\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.464255 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-combined-ca-bundle\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.464541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-credential-keys\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.465550 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-fernet-keys\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.466251 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-config-data\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.482419 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nb95\" (UniqueName: \"kubernetes.io/projected/4f507b8a-c303-45a0-88c6-ebef1addcef2-kube-api-access-5nb95\") pod \"keystone-bootstrap-hzm8t\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.502686 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:39 crc kubenswrapper[4799]: I0127 09:13:39.556354 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:40 crc kubenswrapper[4799]: I0127 09:13:40.028429 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6859fc6d9c-qbc77"] Jan 27 09:13:40 crc kubenswrapper[4799]: W0127 09:13:40.028683 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod796fb974_de10_4836_9541_96983ed8f913.slice/crio-547543eac5e6c0fba7f1bee3b45eaf24c9411a6a58ca9c0bd21ddbe7bcd451dc WatchSource:0}: Error finding container 547543eac5e6c0fba7f1bee3b45eaf24c9411a6a58ca9c0bd21ddbe7bcd451dc: Status 404 returned error can't find the container with id 547543eac5e6c0fba7f1bee3b45eaf24c9411a6a58ca9c0bd21ddbe7bcd451dc Jan 27 09:13:40 crc kubenswrapper[4799]: I0127 09:13:40.090546 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hzm8t"] Jan 27 09:13:40 crc kubenswrapper[4799]: I0127 09:13:40.985837 4799 generic.go:334] "Generic (PLEG): container finished" podID="796fb974-de10-4836-9541-96983ed8f913" containerID="0a659ba51a6e477ebf497cb9942800a4c251471590cf6b6a6968381a4ef50c7a" exitCode=0 Jan 27 09:13:40 crc kubenswrapper[4799]: I0127 09:13:40.986243 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" event={"ID":"796fb974-de10-4836-9541-96983ed8f913","Type":"ContainerDied","Data":"0a659ba51a6e477ebf497cb9942800a4c251471590cf6b6a6968381a4ef50c7a"} Jan 27 09:13:40 crc kubenswrapper[4799]: I0127 09:13:40.986278 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" event={"ID":"796fb974-de10-4836-9541-96983ed8f913","Type":"ContainerStarted","Data":"547543eac5e6c0fba7f1bee3b45eaf24c9411a6a58ca9c0bd21ddbe7bcd451dc"} 
Jan 27 09:13:40 crc kubenswrapper[4799]: I0127 09:13:40.992511 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hzm8t" event={"ID":"4f507b8a-c303-45a0-88c6-ebef1addcef2","Type":"ContainerStarted","Data":"fc867c06f720d548cefd95be7eaff8e162dde88bb372af9793b97d1e44e8e54c"} Jan 27 09:13:40 crc kubenswrapper[4799]: I0127 09:13:40.992599 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hzm8t" event={"ID":"4f507b8a-c303-45a0-88c6-ebef1addcef2","Type":"ContainerStarted","Data":"75c1648ceb0bc3906f8d19f9ec7986ac2956433d2e986fc5b9cb7a7d98951ae6"} Jan 27 09:13:41 crc kubenswrapper[4799]: I0127 09:13:41.046285 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hzm8t" podStartSLOduration=2.046260236 podStartE2EDuration="2.046260236s" podCreationTimestamp="2026-01-27 09:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:41.044977511 +0000 UTC m=+5287.356081576" watchObservedRunningTime="2026-01-27 09:13:41.046260236 +0000 UTC m=+5287.357364311" Jan 27 09:13:42 crc kubenswrapper[4799]: I0127 09:13:42.007002 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" event={"ID":"796fb974-de10-4836-9541-96983ed8f913","Type":"ContainerStarted","Data":"cb5d2a8f4c87046dd3984e8467f7abaaed3e5ca6c98d4579d5b8df77b584a27f"} Jan 27 09:13:42 crc kubenswrapper[4799]: I0127 09:13:42.033664 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" podStartSLOduration=3.03363255 podStartE2EDuration="3.03363255s" podCreationTimestamp="2026-01-27 09:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:42.028252993 +0000 UTC m=+5288.339357058" 
watchObservedRunningTime="2026-01-27 09:13:42.03363255 +0000 UTC m=+5288.344736625" Jan 27 09:13:43 crc kubenswrapper[4799]: I0127 09:13:43.018175 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:44 crc kubenswrapper[4799]: I0127 09:13:44.031452 4799 generic.go:334] "Generic (PLEG): container finished" podID="4f507b8a-c303-45a0-88c6-ebef1addcef2" containerID="fc867c06f720d548cefd95be7eaff8e162dde88bb372af9793b97d1e44e8e54c" exitCode=0 Jan 27 09:13:44 crc kubenswrapper[4799]: I0127 09:13:44.031585 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hzm8t" event={"ID":"4f507b8a-c303-45a0-88c6-ebef1addcef2","Type":"ContainerDied","Data":"fc867c06f720d548cefd95be7eaff8e162dde88bb372af9793b97d1e44e8e54c"} Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.334024 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.379024 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-credential-keys\") pod \"4f507b8a-c303-45a0-88c6-ebef1addcef2\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.379423 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-combined-ca-bundle\") pod \"4f507b8a-c303-45a0-88c6-ebef1addcef2\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.379470 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-config-data\") pod 
\"4f507b8a-c303-45a0-88c6-ebef1addcef2\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.379486 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-scripts\") pod \"4f507b8a-c303-45a0-88c6-ebef1addcef2\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.379516 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-fernet-keys\") pod \"4f507b8a-c303-45a0-88c6-ebef1addcef2\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.379546 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nb95\" (UniqueName: \"kubernetes.io/projected/4f507b8a-c303-45a0-88c6-ebef1addcef2-kube-api-access-5nb95\") pod \"4f507b8a-c303-45a0-88c6-ebef1addcef2\" (UID: \"4f507b8a-c303-45a0-88c6-ebef1addcef2\") " Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.407748 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f507b8a-c303-45a0-88c6-ebef1addcef2-kube-api-access-5nb95" (OuterVolumeSpecName: "kube-api-access-5nb95") pod "4f507b8a-c303-45a0-88c6-ebef1addcef2" (UID: "4f507b8a-c303-45a0-88c6-ebef1addcef2"). InnerVolumeSpecName "kube-api-access-5nb95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.408332 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4f507b8a-c303-45a0-88c6-ebef1addcef2" (UID: "4f507b8a-c303-45a0-88c6-ebef1addcef2"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.412590 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4f507b8a-c303-45a0-88c6-ebef1addcef2" (UID: "4f507b8a-c303-45a0-88c6-ebef1addcef2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.430907 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-scripts" (OuterVolumeSpecName: "scripts") pod "4f507b8a-c303-45a0-88c6-ebef1addcef2" (UID: "4f507b8a-c303-45a0-88c6-ebef1addcef2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.431654 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-config-data" (OuterVolumeSpecName: "config-data") pod "4f507b8a-c303-45a0-88c6-ebef1addcef2" (UID: "4f507b8a-c303-45a0-88c6-ebef1addcef2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.446432 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f507b8a-c303-45a0-88c6-ebef1addcef2" (UID: "4f507b8a-c303-45a0-88c6-ebef1addcef2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.481938 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.481983 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.481996 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.482007 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.482020 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nb95\" (UniqueName: \"kubernetes.io/projected/4f507b8a-c303-45a0-88c6-ebef1addcef2-kube-api-access-5nb95\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:45 crc kubenswrapper[4799]: I0127 09:13:45.482033 4799 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4f507b8a-c303-45a0-88c6-ebef1addcef2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.049946 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hzm8t" event={"ID":"4f507b8a-c303-45a0-88c6-ebef1addcef2","Type":"ContainerDied","Data":"75c1648ceb0bc3906f8d19f9ec7986ac2956433d2e986fc5b9cb7a7d98951ae6"} Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 
09:13:46.049990 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75c1648ceb0bc3906f8d19f9ec7986ac2956433d2e986fc5b9cb7a7d98951ae6" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.050156 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hzm8t" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.132672 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hzm8t"] Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.140420 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hzm8t"] Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.226030 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lkkc7"] Jan 27 09:13:46 crc kubenswrapper[4799]: E0127 09:13:46.226459 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f507b8a-c303-45a0-88c6-ebef1addcef2" containerName="keystone-bootstrap" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.226483 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f507b8a-c303-45a0-88c6-ebef1addcef2" containerName="keystone-bootstrap" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.226706 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f507b8a-c303-45a0-88c6-ebef1addcef2" containerName="keystone-bootstrap" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.227479 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.231854 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.232069 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.232078 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.232367 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.232610 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9tlk6" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.236504 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lkkc7"] Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.294144 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7ssn\" (UniqueName: \"kubernetes.io/projected/29f0f414-45f8-4563-b38a-0d09caab1f67-kube-api-access-c7ssn\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.294216 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-fernet-keys\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.294253 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-scripts\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.294291 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-config-data\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.294391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-combined-ca-bundle\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.294462 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-credential-keys\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.396018 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7ssn\" (UniqueName: \"kubernetes.io/projected/29f0f414-45f8-4563-b38a-0d09caab1f67-kube-api-access-c7ssn\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.397467 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-fernet-keys\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.397583 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-scripts\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.397635 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-config-data\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.397910 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-combined-ca-bundle\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.398182 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-credential-keys\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.402559 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-scripts\") pod \"keystone-bootstrap-lkkc7\" (UID: 
\"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.404080 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-combined-ca-bundle\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.405408 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-credential-keys\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.405484 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-config-data\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.406605 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-fernet-keys\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.419691 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7ssn\" (UniqueName: \"kubernetes.io/projected/29f0f414-45f8-4563-b38a-0d09caab1f67-kube-api-access-c7ssn\") pod \"keystone-bootstrap-lkkc7\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 
09:13:46.462995 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f507b8a-c303-45a0-88c6-ebef1addcef2" path="/var/lib/kubelet/pods/4f507b8a-c303-45a0-88c6-ebef1addcef2/volumes" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.544588 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:46 crc kubenswrapper[4799]: I0127 09:13:46.999337 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lkkc7"] Jan 27 09:13:47 crc kubenswrapper[4799]: W0127 09:13:47.003575 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29f0f414_45f8_4563_b38a_0d09caab1f67.slice/crio-ab47838778bb32cecf5eea22875df2fa6330ae661a21f5ff6cb12ef1b18b0bf6 WatchSource:0}: Error finding container ab47838778bb32cecf5eea22875df2fa6330ae661a21f5ff6cb12ef1b18b0bf6: Status 404 returned error can't find the container with id ab47838778bb32cecf5eea22875df2fa6330ae661a21f5ff6cb12ef1b18b0bf6 Jan 27 09:13:47 crc kubenswrapper[4799]: I0127 09:13:47.063539 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lkkc7" event={"ID":"29f0f414-45f8-4563-b38a-0d09caab1f67","Type":"ContainerStarted","Data":"ab47838778bb32cecf5eea22875df2fa6330ae661a21f5ff6cb12ef1b18b0bf6"} Jan 27 09:13:48 crc kubenswrapper[4799]: I0127 09:13:48.077093 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lkkc7" event={"ID":"29f0f414-45f8-4563-b38a-0d09caab1f67","Type":"ContainerStarted","Data":"46b0b8c4e4233f7b805ce62a3871409ef973b98d19b0d9dd81b337f123e5dc86"} Jan 27 09:13:48 crc kubenswrapper[4799]: I0127 09:13:48.122731 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lkkc7" podStartSLOduration=2.122704347 podStartE2EDuration="2.122704347s" podCreationTimestamp="2026-01-27 09:13:46 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:48.110238527 +0000 UTC m=+5294.421342632" watchObservedRunningTime="2026-01-27 09:13:48.122704347 +0000 UTC m=+5294.433808452" Jan 27 09:13:49 crc kubenswrapper[4799]: I0127 09:13:49.504397 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:13:49 crc kubenswrapper[4799]: I0127 09:13:49.580576 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864ff95797-vv7kb"] Jan 27 09:13:49 crc kubenswrapper[4799]: I0127 09:13:49.582814 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" podUID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerName="dnsmasq-dns" containerID="cri-o://01fae0e63dc6f3ecdda121c1bff940bd2cc728ab4bad771f9eb733cf62a2f098" gracePeriod=10 Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.095098 4799 generic.go:334] "Generic (PLEG): container finished" podID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerID="01fae0e63dc6f3ecdda121c1bff940bd2cc728ab4bad771f9eb733cf62a2f098" exitCode=0 Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.095234 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" event={"ID":"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd","Type":"ContainerDied","Data":"01fae0e63dc6f3ecdda121c1bff940bd2cc728ab4bad771f9eb733cf62a2f098"} Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.097493 4799 generic.go:334] "Generic (PLEG): container finished" podID="29f0f414-45f8-4563-b38a-0d09caab1f67" containerID="46b0b8c4e4233f7b805ce62a3871409ef973b98d19b0d9dd81b337f123e5dc86" exitCode=0 Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.097534 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lkkc7" 
event={"ID":"29f0f414-45f8-4563-b38a-0d09caab1f67","Type":"ContainerDied","Data":"46b0b8c4e4233f7b805ce62a3871409ef973b98d19b0d9dd81b337f123e5dc86"} Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.185617 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.269472 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-config\") pod \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.269548 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-sb\") pod \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.269604 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-dns-svc\") pod \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.269664 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr8wf\" (UniqueName: \"kubernetes.io/projected/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-kube-api-access-sr8wf\") pod \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.269851 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-nb\") 
pod \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\" (UID: \"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd\") " Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.275648 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-kube-api-access-sr8wf" (OuterVolumeSpecName: "kube-api-access-sr8wf") pod "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" (UID: "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd"). InnerVolumeSpecName "kube-api-access-sr8wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.310133 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" (UID: "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.310146 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-config" (OuterVolumeSpecName: "config") pod "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" (UID: "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.314477 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" (UID: "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.318479 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" (UID: "f3f174ce-1bf6-4fc7-a8f2-353f9365cffd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.371541 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.371582 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.371592 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.371635 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:50 crc kubenswrapper[4799]: I0127 09:13:50.371645 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr8wf\" (UniqueName: \"kubernetes.io/projected/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd-kube-api-access-sr8wf\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.108686 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.108694 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864ff95797-vv7kb" event={"ID":"f3f174ce-1bf6-4fc7-a8f2-353f9365cffd","Type":"ContainerDied","Data":"2258c71354fbde2f3eea4924b4956fba360541bebf0bc91e18912ec465dee6f4"} Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.109189 4799 scope.go:117] "RemoveContainer" containerID="01fae0e63dc6f3ecdda121c1bff940bd2cc728ab4bad771f9eb733cf62a2f098" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.142148 4799 scope.go:117] "RemoveContainer" containerID="13dde022673314111a18adffded98f932aa6045543b694ec01b7af8985d1357d" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.150256 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864ff95797-vv7kb"] Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.160995 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864ff95797-vv7kb"] Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.486972 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.628087 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-credential-keys\") pod \"29f0f414-45f8-4563-b38a-0d09caab1f67\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.628157 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7ssn\" (UniqueName: \"kubernetes.io/projected/29f0f414-45f8-4563-b38a-0d09caab1f67-kube-api-access-c7ssn\") pod \"29f0f414-45f8-4563-b38a-0d09caab1f67\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.628217 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-fernet-keys\") pod \"29f0f414-45f8-4563-b38a-0d09caab1f67\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.628246 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-config-data\") pod \"29f0f414-45f8-4563-b38a-0d09caab1f67\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.628407 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-combined-ca-bundle\") pod \"29f0f414-45f8-4563-b38a-0d09caab1f67\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.628457 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-scripts\") pod \"29f0f414-45f8-4563-b38a-0d09caab1f67\" (UID: \"29f0f414-45f8-4563-b38a-0d09caab1f67\") " Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.633993 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f0f414-45f8-4563-b38a-0d09caab1f67-kube-api-access-c7ssn" (OuterVolumeSpecName: "kube-api-access-c7ssn") pod "29f0f414-45f8-4563-b38a-0d09caab1f67" (UID: "29f0f414-45f8-4563-b38a-0d09caab1f67"). InnerVolumeSpecName "kube-api-access-c7ssn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.634271 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "29f0f414-45f8-4563-b38a-0d09caab1f67" (UID: "29f0f414-45f8-4563-b38a-0d09caab1f67"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.634762 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-scripts" (OuterVolumeSpecName: "scripts") pod "29f0f414-45f8-4563-b38a-0d09caab1f67" (UID: "29f0f414-45f8-4563-b38a-0d09caab1f67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.651090 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-config-data" (OuterVolumeSpecName: "config-data") pod "29f0f414-45f8-4563-b38a-0d09caab1f67" (UID: "29f0f414-45f8-4563-b38a-0d09caab1f67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.653539 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "29f0f414-45f8-4563-b38a-0d09caab1f67" (UID: "29f0f414-45f8-4563-b38a-0d09caab1f67"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.670800 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29f0f414-45f8-4563-b38a-0d09caab1f67" (UID: "29f0f414-45f8-4563-b38a-0d09caab1f67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.729652 4799 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.729698 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7ssn\" (UniqueName: \"kubernetes.io/projected/29f0f414-45f8-4563-b38a-0d09caab1f67-kube-api-access-c7ssn\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.729708 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.729717 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.729725 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:51 crc kubenswrapper[4799]: I0127 09:13:51.729734 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f0f414-45f8-4563-b38a-0d09caab1f67-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.121702 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lkkc7" event={"ID":"29f0f414-45f8-4563-b38a-0d09caab1f67","Type":"ContainerDied","Data":"ab47838778bb32cecf5eea22875df2fa6330ae661a21f5ff6cb12ef1b18b0bf6"} Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.122024 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab47838778bb32cecf5eea22875df2fa6330ae661a21f5ff6cb12ef1b18b0bf6" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.121983 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lkkc7" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.478404 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" path="/var/lib/kubelet/pods/f3f174ce-1bf6-4fc7-a8f2-353f9365cffd/volumes" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.595117 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7cb6c95d94-hxzp8"] Jan 27 09:13:52 crc kubenswrapper[4799]: E0127 09:13:52.595477 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f0f414-45f8-4563-b38a-0d09caab1f67" containerName="keystone-bootstrap" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.595494 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f0f414-45f8-4563-b38a-0d09caab1f67" containerName="keystone-bootstrap" Jan 27 09:13:52 crc kubenswrapper[4799]: E0127 09:13:52.595506 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerName="init" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.595512 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerName="init" Jan 27 09:13:52 crc kubenswrapper[4799]: E0127 09:13:52.595521 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerName="dnsmasq-dns" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.595527 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerName="dnsmasq-dns" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.595694 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f0f414-45f8-4563-b38a-0d09caab1f67" containerName="keystone-bootstrap" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.595707 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f3f174ce-1bf6-4fc7-a8f2-353f9365cffd" containerName="dnsmasq-dns" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.596175 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.598224 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9tlk6" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.598534 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.599533 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.607649 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.617815 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb6c95d94-hxzp8"] Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.748960 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-fernet-keys\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.749258 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-combined-ca-bundle\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.749376 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-credential-keys\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.749473 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5hpw\" (UniqueName: \"kubernetes.io/projected/a86471f0-5809-491d-8f4a-f236533017f8-kube-api-access-t5hpw\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.749566 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-scripts\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.749652 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-config-data\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.851555 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5hpw\" (UniqueName: \"kubernetes.io/projected/a86471f0-5809-491d-8f4a-f236533017f8-kube-api-access-t5hpw\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.851621 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-scripts\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.851665 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-config-data\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.851727 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-fernet-keys\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.851794 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-combined-ca-bundle\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.851821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-credential-keys\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.855851 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-credential-keys\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.859181 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-config-data\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.860236 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-fernet-keys\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.861791 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-combined-ca-bundle\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.865618 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a86471f0-5809-491d-8f4a-f236533017f8-scripts\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: \"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.875677 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5hpw\" (UniqueName: \"kubernetes.io/projected/a86471f0-5809-491d-8f4a-f236533017f8-kube-api-access-t5hpw\") pod \"keystone-7cb6c95d94-hxzp8\" (UID: 
\"a86471f0-5809-491d-8f4a-f236533017f8\") " pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:52 crc kubenswrapper[4799]: I0127 09:13:52.916144 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:53 crc kubenswrapper[4799]: I0127 09:13:53.339674 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb6c95d94-hxzp8"] Jan 27 09:13:53 crc kubenswrapper[4799]: I0127 09:13:53.731615 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:13:53 crc kubenswrapper[4799]: I0127 09:13:53.732170 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:13:53 crc kubenswrapper[4799]: I0127 09:13:53.732234 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:13:53 crc kubenswrapper[4799]: I0127 09:13:53.733646 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:13:53 crc kubenswrapper[4799]: I0127 09:13:53.733756 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" gracePeriod=600 Jan 27 09:13:53 crc kubenswrapper[4799]: E0127 09:13:53.851126 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.140113 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb6c95d94-hxzp8" event={"ID":"a86471f0-5809-491d-8f4a-f236533017f8","Type":"ContainerStarted","Data":"e754fc207e9985ddbd08771284830a821c803d9c6abf11f4442920a1c8b2b0a9"} Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.140173 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb6c95d94-hxzp8" event={"ID":"a86471f0-5809-491d-8f4a-f236533017f8","Type":"ContainerStarted","Data":"9b6ae7e0ccaedc634ce0dabdb1b2b2e03c136de92d960bb885e754853db272b0"} Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.140245 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.141877 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" exitCode=0 Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.141909 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"} Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.141959 4799 scope.go:117] "RemoveContainer" containerID="1d935470e42da2a88c4f12bf0eaba98305672f9b4f487c92db4e981fe15a0e50" Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.142264 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:13:54 crc kubenswrapper[4799]: E0127 09:13:54.142509 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:13:54 crc kubenswrapper[4799]: I0127 09:13:54.158668 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7cb6c95d94-hxzp8" podStartSLOduration=2.158655085 podStartE2EDuration="2.158655085s" podCreationTimestamp="2026-01-27 09:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:13:54.157823673 +0000 UTC m=+5300.468927768" watchObservedRunningTime="2026-01-27 09:13:54.158655085 +0000 UTC m=+5300.469759150" Jan 27 09:14:08 crc kubenswrapper[4799]: I0127 09:14:08.451632 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:14:08 crc kubenswrapper[4799]: E0127 09:14:08.452441 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:14:21 crc kubenswrapper[4799]: I0127 09:14:21.451808 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:14:21 crc kubenswrapper[4799]: E0127 09:14:21.452535 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.244889 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvdxr"] Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.247864 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.253954 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvdxr"] Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.348747 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-utilities\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.348879 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22p65\" (UniqueName: \"kubernetes.io/projected/57d8c208-953b-406b-9d76-78abd1ea16c7-kube-api-access-22p65\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.348974 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-catalog-content\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.451047 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22p65\" (UniqueName: \"kubernetes.io/projected/57d8c208-953b-406b-9d76-78abd1ea16c7-kube-api-access-22p65\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.451152 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-catalog-content\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.451232 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-utilities\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.451744 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-catalog-content\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.451761 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-utilities\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.474025 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22p65\" (UniqueName: \"kubernetes.io/projected/57d8c208-953b-406b-9d76-78abd1ea16c7-kube-api-access-22p65\") pod \"certified-operators-mvdxr\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:22 crc kubenswrapper[4799]: I0127 09:14:22.571431 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:23 crc kubenswrapper[4799]: I0127 09:14:23.071529 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvdxr"] Jan 27 09:14:23 crc kubenswrapper[4799]: W0127 09:14:23.075394 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57d8c208_953b_406b_9d76_78abd1ea16c7.slice/crio-b26e3514b1a663095e4ddf65a63e1f2331ae453da7cf8b07ac6a84c626ff5b0b WatchSource:0}: Error finding container b26e3514b1a663095e4ddf65a63e1f2331ae453da7cf8b07ac6a84c626ff5b0b: Status 404 returned error can't find the container with id b26e3514b1a663095e4ddf65a63e1f2331ae453da7cf8b07ac6a84c626ff5b0b Jan 27 09:14:23 crc kubenswrapper[4799]: I0127 09:14:23.402695 4799 generic.go:334] "Generic (PLEG): container finished" podID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerID="082bf0c2700ca42b5deb31efaf2dbbbd68fecfcc4e384ee9fe98c015d621df66" exitCode=0 Jan 27 09:14:23 crc kubenswrapper[4799]: I0127 09:14:23.402741 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvdxr" event={"ID":"57d8c208-953b-406b-9d76-78abd1ea16c7","Type":"ContainerDied","Data":"082bf0c2700ca42b5deb31efaf2dbbbd68fecfcc4e384ee9fe98c015d621df66"} Jan 27 09:14:23 crc kubenswrapper[4799]: I0127 09:14:23.403004 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvdxr" event={"ID":"57d8c208-953b-406b-9d76-78abd1ea16c7","Type":"ContainerStarted","Data":"b26e3514b1a663095e4ddf65a63e1f2331ae453da7cf8b07ac6a84c626ff5b0b"} Jan 27 09:14:24 crc kubenswrapper[4799]: I0127 09:14:24.411105 4799 generic.go:334] "Generic (PLEG): container finished" podID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerID="60972ecfcaae3a1d8cc3971335cab2f74bd0581b95f7ca6e3e930d9c7d30742c" exitCode=0 Jan 27 09:14:24 crc kubenswrapper[4799]: I0127 
09:14:24.411188 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvdxr" event={"ID":"57d8c208-953b-406b-9d76-78abd1ea16c7","Type":"ContainerDied","Data":"60972ecfcaae3a1d8cc3971335cab2f74bd0581b95f7ca6e3e930d9c7d30742c"} Jan 27 09:14:24 crc kubenswrapper[4799]: I0127 09:14:24.478201 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7cb6c95d94-hxzp8" Jan 27 09:14:26 crc kubenswrapper[4799]: I0127 09:14:26.431943 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvdxr" event={"ID":"57d8c208-953b-406b-9d76-78abd1ea16c7","Type":"ContainerStarted","Data":"875e04e6f807f6719782958b8851ec4f6f5a529d9e15da5b38ab29232915cef0"} Jan 27 09:14:26 crc kubenswrapper[4799]: I0127 09:14:26.456343 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvdxr" podStartSLOduration=2.313839786 podStartE2EDuration="4.456288237s" podCreationTimestamp="2026-01-27 09:14:22 +0000 UTC" firstStartedPulling="2026-01-27 09:14:23.404159832 +0000 UTC m=+5329.715263897" lastFinishedPulling="2026-01-27 09:14:25.546608283 +0000 UTC m=+5331.857712348" observedRunningTime="2026-01-27 09:14:26.448581457 +0000 UTC m=+5332.759685542" watchObservedRunningTime="2026-01-27 09:14:26.456288237 +0000 UTC m=+5332.767392322" Jan 27 09:14:27 crc kubenswrapper[4799]: I0127 09:14:27.970138 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 09:14:27 crc kubenswrapper[4799]: I0127 09:14:27.971538 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 09:14:27 crc kubenswrapper[4799]: I0127 09:14:27.975619 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 09:14:27 crc kubenswrapper[4799]: I0127 09:14:27.975662 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 09:14:27 crc kubenswrapper[4799]: I0127 09:14:27.975709 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vj7px" Jan 27 09:14:27 crc kubenswrapper[4799]: I0127 09:14:27.982839 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.008516 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 09:14:28 crc kubenswrapper[4799]: E0127 09:14:28.009185 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-gdb2c openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[kube-api-access-gdb2c openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="1aebe5f7-6bce-48c5-985c-404dc2115858" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.040885 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.045934 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config-secret\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.045996 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.046027 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdb2c\" (UniqueName: \"kubernetes.io/projected/1aebe5f7-6bce-48c5-985c-404dc2115858-kube-api-access-gdb2c\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.051230 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.052615 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.058810 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.147575 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-openstack-config-secret\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.147662 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6hg\" (UniqueName: \"kubernetes.io/projected/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-kube-api-access-sc6hg\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.147698 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config-secret\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.147743 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.147766 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-openstack-config\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.147788 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdb2c\" (UniqueName: \"kubernetes.io/projected/1aebe5f7-6bce-48c5-985c-404dc2115858-kube-api-access-gdb2c\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.149728 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: E0127 09:14:28.149936 4799 projected.go:194] Error preparing data for projected volume kube-api-access-gdb2c for pod openstack/openstackclient: failed to fetch token: serviceaccounts 
"openstackclient-openstackclient" is forbidden: the UID in the bound object reference (1aebe5f7-6bce-48c5-985c-404dc2115858) does not match the UID in record. The object might have been deleted and then recreated Jan 27 09:14:28 crc kubenswrapper[4799]: E0127 09:14:28.150001 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1aebe5f7-6bce-48c5-985c-404dc2115858-kube-api-access-gdb2c podName:1aebe5f7-6bce-48c5-985c-404dc2115858 nodeName:}" failed. No retries permitted until 2026-01-27 09:14:28.649984628 +0000 UTC m=+5334.961088693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gdb2c" (UniqueName: "kubernetes.io/projected/1aebe5f7-6bce-48c5-985c-404dc2115858-kube-api-access-gdb2c") pod "openstackclient" (UID: "1aebe5f7-6bce-48c5-985c-404dc2115858") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (1aebe5f7-6bce-48c5-985c-404dc2115858) does not match the UID in record. 
The object might have been deleted and then recreated Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.153950 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config-secret\") pod \"openstackclient\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.249114 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-openstack-config\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.249266 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-openstack-config-secret\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.249295 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6hg\" (UniqueName: \"kubernetes.io/projected/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-kube-api-access-sc6hg\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.250119 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-openstack-config\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.252645 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-openstack-config-secret\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.265896 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6hg\" (UniqueName: \"kubernetes.io/projected/d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80-kube-api-access-sc6hg\") pod \"openstackclient\" (UID: \"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80\") " pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.374854 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.447999 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.451076 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1aebe5f7-6bce-48c5-985c-404dc2115858" podUID="d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.501717 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.561273 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config\") pod \"1aebe5f7-6bce-48c5-985c-404dc2115858\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.561402 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config-secret\") pod \"1aebe5f7-6bce-48c5-985c-404dc2115858\" (UID: \"1aebe5f7-6bce-48c5-985c-404dc2115858\") " Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.561840 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdb2c\" (UniqueName: \"kubernetes.io/projected/1aebe5f7-6bce-48c5-985c-404dc2115858-kube-api-access-gdb2c\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.562869 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1aebe5f7-6bce-48c5-985c-404dc2115858" (UID: "1aebe5f7-6bce-48c5-985c-404dc2115858"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.572489 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1aebe5f7-6bce-48c5-985c-404dc2115858" (UID: "1aebe5f7-6bce-48c5-985c-404dc2115858"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.667427 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.667461 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1aebe5f7-6bce-48c5-985c-404dc2115858-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:28 crc kubenswrapper[4799]: I0127 09:14:28.806877 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 09:14:28 crc kubenswrapper[4799]: W0127 09:14:28.811487 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2be17ba_8bd6_43b5_ad28_f0dd7f5e6e80.slice/crio-628885fc815242dc181188766ed3d08cc2069127a386b2cc9e3e05e5717ff249 WatchSource:0}: Error finding container 628885fc815242dc181188766ed3d08cc2069127a386b2cc9e3e05e5717ff249: Status 404 returned error can't find the container with id 628885fc815242dc181188766ed3d08cc2069127a386b2cc9e3e05e5717ff249 Jan 27 09:14:29 crc kubenswrapper[4799]: I0127 09:14:29.460861 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 09:14:29 crc kubenswrapper[4799]: I0127 09:14:29.465399 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80","Type":"ContainerStarted","Data":"5f684e98a6799c3dba1b3f003af8d7ab16d64164889369d323be0e40a4b69fcc"} Jan 27 09:14:29 crc kubenswrapper[4799]: I0127 09:14:29.465450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80","Type":"ContainerStarted","Data":"628885fc815242dc181188766ed3d08cc2069127a386b2cc9e3e05e5717ff249"} Jan 27 09:14:29 crc kubenswrapper[4799]: I0127 09:14:29.471393 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1aebe5f7-6bce-48c5-985c-404dc2115858" podUID="d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80" Jan 27 09:14:29 crc kubenswrapper[4799]: I0127 09:14:29.495145 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.495127121 podStartE2EDuration="1.495127121s" podCreationTimestamp="2026-01-27 09:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:14:29.489352894 +0000 UTC m=+5335.800456959" watchObservedRunningTime="2026-01-27 09:14:29.495127121 +0000 UTC m=+5335.806231186" Jan 27 09:14:30 crc kubenswrapper[4799]: I0127 09:14:30.463963 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aebe5f7-6bce-48c5-985c-404dc2115858" path="/var/lib/kubelet/pods/1aebe5f7-6bce-48c5-985c-404dc2115858/volumes" Jan 27 09:14:32 crc kubenswrapper[4799]: I0127 09:14:32.572077 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:32 crc kubenswrapper[4799]: I0127 09:14:32.572471 
4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:32 crc kubenswrapper[4799]: I0127 09:14:32.611010 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:33 crc kubenswrapper[4799]: I0127 09:14:33.546593 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:33 crc kubenswrapper[4799]: I0127 09:14:33.602412 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvdxr"] Jan 27 09:14:35 crc kubenswrapper[4799]: I0127 09:14:35.517716 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mvdxr" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="registry-server" containerID="cri-o://875e04e6f807f6719782958b8851ec4f6f5a529d9e15da5b38ab29232915cef0" gracePeriod=2 Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.454726 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:14:36 crc kubenswrapper[4799]: E0127 09:14:36.455216 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.534001 4799 generic.go:334] "Generic (PLEG): container finished" podID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerID="875e04e6f807f6719782958b8851ec4f6f5a529d9e15da5b38ab29232915cef0" exitCode=0 Jan 27 09:14:36 
crc kubenswrapper[4799]: I0127 09:14:36.534046 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvdxr" event={"ID":"57d8c208-953b-406b-9d76-78abd1ea16c7","Type":"ContainerDied","Data":"875e04e6f807f6719782958b8851ec4f6f5a529d9e15da5b38ab29232915cef0"} Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.534072 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvdxr" event={"ID":"57d8c208-953b-406b-9d76-78abd1ea16c7","Type":"ContainerDied","Data":"b26e3514b1a663095e4ddf65a63e1f2331ae453da7cf8b07ac6a84c626ff5b0b"} Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.534086 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26e3514b1a663095e4ddf65a63e1f2331ae453da7cf8b07ac6a84c626ff5b0b" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.564825 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.610699 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22p65\" (UniqueName: \"kubernetes.io/projected/57d8c208-953b-406b-9d76-78abd1ea16c7-kube-api-access-22p65\") pod \"57d8c208-953b-406b-9d76-78abd1ea16c7\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.610875 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-utilities\") pod \"57d8c208-953b-406b-9d76-78abd1ea16c7\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.610925 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-catalog-content\") pod \"57d8c208-953b-406b-9d76-78abd1ea16c7\" (UID: \"57d8c208-953b-406b-9d76-78abd1ea16c7\") " Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.616343 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-utilities" (OuterVolumeSpecName: "utilities") pod "57d8c208-953b-406b-9d76-78abd1ea16c7" (UID: "57d8c208-953b-406b-9d76-78abd1ea16c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.623477 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d8c208-953b-406b-9d76-78abd1ea16c7-kube-api-access-22p65" (OuterVolumeSpecName: "kube-api-access-22p65") pod "57d8c208-953b-406b-9d76-78abd1ea16c7" (UID: "57d8c208-953b-406b-9d76-78abd1ea16c7"). InnerVolumeSpecName "kube-api-access-22p65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.674769 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57d8c208-953b-406b-9d76-78abd1ea16c7" (UID: "57d8c208-953b-406b-9d76-78abd1ea16c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.712624 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.712652 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d8c208-953b-406b-9d76-78abd1ea16c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:36 crc kubenswrapper[4799]: I0127 09:14:36.712663 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22p65\" (UniqueName: \"kubernetes.io/projected/57d8c208-953b-406b-9d76-78abd1ea16c7-kube-api-access-22p65\") on node \"crc\" DevicePath \"\"" Jan 27 09:14:37 crc kubenswrapper[4799]: I0127 09:14:37.543193 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvdxr" Jan 27 09:14:37 crc kubenswrapper[4799]: I0127 09:14:37.594727 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvdxr"] Jan 27 09:14:37 crc kubenswrapper[4799]: I0127 09:14:37.605182 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mvdxr"] Jan 27 09:14:38 crc kubenswrapper[4799]: I0127 09:14:38.463090 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" path="/var/lib/kubelet/pods/57d8c208-953b-406b-9d76-78abd1ea16c7/volumes" Jan 27 09:14:47 crc kubenswrapper[4799]: I0127 09:14:47.451783 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:14:47 crc kubenswrapper[4799]: E0127 09:14:47.452722 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.167226 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z"] Jan 27 09:15:00 crc kubenswrapper[4799]: E0127 09:15:00.168360 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="registry-server" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.168381 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="registry-server" Jan 27 09:15:00 crc kubenswrapper[4799]: E0127 09:15:00.168399 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="extract-utilities" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.168410 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="extract-utilities" Jan 27 09:15:00 crc kubenswrapper[4799]: E0127 09:15:00.168433 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="extract-content" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.168443 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="extract-content" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.168691 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d8c208-953b-406b-9d76-78abd1ea16c7" containerName="registry-server" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.169755 4799 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.172721 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.173931 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.182012 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z"] Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.351347 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24389976-4042-4f68-b694-1002ddb60da0-secret-volume\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.351406 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqj7\" (UniqueName: \"kubernetes.io/projected/24389976-4042-4f68-b694-1002ddb60da0-kube-api-access-kxqj7\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.351672 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24389976-4042-4f68-b694-1002ddb60da0-config-volume\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.453509 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24389976-4042-4f68-b694-1002ddb60da0-config-volume\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.453596 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24389976-4042-4f68-b694-1002ddb60da0-secret-volume\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.453642 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqj7\" (UniqueName: \"kubernetes.io/projected/24389976-4042-4f68-b694-1002ddb60da0-kube-api-access-kxqj7\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.455143 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24389976-4042-4f68-b694-1002ddb60da0-config-volume\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.461072 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/24389976-4042-4f68-b694-1002ddb60da0-secret-volume\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.477926 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqj7\" (UniqueName: \"kubernetes.io/projected/24389976-4042-4f68-b694-1002ddb60da0-kube-api-access-kxqj7\") pod \"collect-profiles-29491755-6vf7z\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:00 crc kubenswrapper[4799]: I0127 09:15:00.496628 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" Jan 27 09:15:01 crc kubenswrapper[4799]: I0127 09:15:01.087246 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z"] Jan 27 09:15:01 crc kubenswrapper[4799]: I0127 09:15:01.452838 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:15:01 crc kubenswrapper[4799]: E0127 09:15:01.453287 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:15:01 crc kubenswrapper[4799]: I0127 09:15:01.737693 4799 generic.go:334] "Generic (PLEG): container finished" podID="24389976-4042-4f68-b694-1002ddb60da0" containerID="bdb889e38d4d1600150ba6a84d5ab3d8e5360d1a2b7de8a4e7b82c944182dee3" 
exitCode=0
Jan 27 09:15:01 crc kubenswrapper[4799]: I0127 09:15:01.737743 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" event={"ID":"24389976-4042-4f68-b694-1002ddb60da0","Type":"ContainerDied","Data":"bdb889e38d4d1600150ba6a84d5ab3d8e5360d1a2b7de8a4e7b82c944182dee3"}
Jan 27 09:15:01 crc kubenswrapper[4799]: I0127 09:15:01.737771 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" event={"ID":"24389976-4042-4f68-b694-1002ddb60da0","Type":"ContainerStarted","Data":"5ca840fbfd6789e620c8c82c5e99e51a754b67bfcc3fd13fb90cfae13dfffcb6"}
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.014275 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z"
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.113816 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24389976-4042-4f68-b694-1002ddb60da0-config-volume\") pod \"24389976-4042-4f68-b694-1002ddb60da0\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") "
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.113961 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxqj7\" (UniqueName: \"kubernetes.io/projected/24389976-4042-4f68-b694-1002ddb60da0-kube-api-access-kxqj7\") pod \"24389976-4042-4f68-b694-1002ddb60da0\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") "
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.114038 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24389976-4042-4f68-b694-1002ddb60da0-secret-volume\") pod \"24389976-4042-4f68-b694-1002ddb60da0\" (UID: \"24389976-4042-4f68-b694-1002ddb60da0\") "
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.115737 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24389976-4042-4f68-b694-1002ddb60da0-config-volume" (OuterVolumeSpecName: "config-volume") pod "24389976-4042-4f68-b694-1002ddb60da0" (UID: "24389976-4042-4f68-b694-1002ddb60da0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.121262 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24389976-4042-4f68-b694-1002ddb60da0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24389976-4042-4f68-b694-1002ddb60da0" (UID: "24389976-4042-4f68-b694-1002ddb60da0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.123379 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24389976-4042-4f68-b694-1002ddb60da0-kube-api-access-kxqj7" (OuterVolumeSpecName: "kube-api-access-kxqj7") pod "24389976-4042-4f68-b694-1002ddb60da0" (UID: "24389976-4042-4f68-b694-1002ddb60da0"). InnerVolumeSpecName "kube-api-access-kxqj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.216618 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24389976-4042-4f68-b694-1002ddb60da0-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.216659 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxqj7\" (UniqueName: \"kubernetes.io/projected/24389976-4042-4f68-b694-1002ddb60da0-kube-api-access-kxqj7\") on node \"crc\" DevicePath \"\""
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.216677 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24389976-4042-4f68-b694-1002ddb60da0-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.755416 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z" event={"ID":"24389976-4042-4f68-b694-1002ddb60da0","Type":"ContainerDied","Data":"5ca840fbfd6789e620c8c82c5e99e51a754b67bfcc3fd13fb90cfae13dfffcb6"}
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.755461 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca840fbfd6789e620c8c82c5e99e51a754b67bfcc3fd13fb90cfae13dfffcb6"
Jan 27 09:15:03 crc kubenswrapper[4799]: I0127 09:15:03.755497 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z"
Jan 27 09:15:04 crc kubenswrapper[4799]: I0127 09:15:04.100790 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf"]
Jan 27 09:15:04 crc kubenswrapper[4799]: I0127 09:15:04.108346 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491710-7qljf"]
Jan 27 09:15:04 crc kubenswrapper[4799]: I0127 09:15:04.463678 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8451b63-3252-4186-9a88-3d831e1a66fc" path="/var/lib/kubelet/pods/d8451b63-3252-4186-9a88-3d831e1a66fc/volumes"
Jan 27 09:15:12 crc kubenswrapper[4799]: I0127 09:15:12.451643 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"
Jan 27 09:15:12 crc kubenswrapper[4799]: E0127 09:15:12.452310 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:15:26 crc kubenswrapper[4799]: I0127 09:15:26.452793 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"
Jan 27 09:15:26 crc kubenswrapper[4799]: E0127 09:15:26.454194 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:15:38 crc kubenswrapper[4799]: I0127 09:15:38.451726 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"
Jan 27 09:15:38 crc kubenswrapper[4799]: E0127 09:15:38.452984 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:15:40 crc kubenswrapper[4799]: I0127 09:15:40.150359 4799 scope.go:117] "RemoveContainer" containerID="9a5c27ce1e96d191a5129160d83742a898be6a93ece214e63952542f67845c16"
Jan 27 09:15:51 crc kubenswrapper[4799]: I0127 09:15:51.452058 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"
Jan 27 09:15:51 crc kubenswrapper[4799]: E0127 09:15:51.452901 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:15:57 crc kubenswrapper[4799]: I0127 09:15:57.052604 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5476s"]
Jan 27 09:15:57 crc kubenswrapper[4799]: I0127 09:15:57.059693 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5476s"]
Jan 27 09:15:58 crc kubenswrapper[4799]: I0127 09:15:58.467335 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e273bb3-f59b-4b53-a996-22631e029156" path="/var/lib/kubelet/pods/3e273bb3-f59b-4b53-a996-22631e029156/volumes"
Jan 27 09:16:03 crc kubenswrapper[4799]: I0127 09:16:03.451959 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"
Jan 27 09:16:03 crc kubenswrapper[4799]: E0127 09:16:03.452808 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.052994 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2843-account-create-update-48mnx"]
Jan 27 09:16:10 crc kubenswrapper[4799]: E0127 09:16:10.053752 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24389976-4042-4f68-b694-1002ddb60da0" containerName="collect-profiles"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.053775 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="24389976-4042-4f68-b694-1002ddb60da0" containerName="collect-profiles"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.053940 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="24389976-4042-4f68-b694-1002ddb60da0" containerName="collect-profiles"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.054561 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.056735 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.064186 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ljhr7"]
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.065526 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.075860 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2843-account-create-update-48mnx"]
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.083692 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ljhr7"]
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.253174 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ln2h\" (UniqueName: \"kubernetes.io/projected/07984ba6-b448-4418-bc8d-e09313294368-kube-api-access-4ln2h\") pod \"barbican-db-create-ljhr7\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") " pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.253277 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdl4f\" (UniqueName: \"kubernetes.io/projected/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-kube-api-access-kdl4f\") pod \"barbican-2843-account-create-update-48mnx\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") " pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.253361 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-operator-scripts\") pod \"barbican-2843-account-create-update-48mnx\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") " pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.253600 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07984ba6-b448-4418-bc8d-e09313294368-operator-scripts\") pod \"barbican-db-create-ljhr7\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") " pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.355172 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdl4f\" (UniqueName: \"kubernetes.io/projected/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-kube-api-access-kdl4f\") pod \"barbican-2843-account-create-update-48mnx\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") " pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.355240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-operator-scripts\") pod \"barbican-2843-account-create-update-48mnx\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") " pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.355320 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07984ba6-b448-4418-bc8d-e09313294368-operator-scripts\") pod \"barbican-db-create-ljhr7\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") " pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.355412 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ln2h\" (UniqueName: \"kubernetes.io/projected/07984ba6-b448-4418-bc8d-e09313294368-kube-api-access-4ln2h\") pod \"barbican-db-create-ljhr7\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") " pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.356562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-operator-scripts\") pod \"barbican-2843-account-create-update-48mnx\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") " pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.356618 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07984ba6-b448-4418-bc8d-e09313294368-operator-scripts\") pod \"barbican-db-create-ljhr7\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") " pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.374569 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdl4f\" (UniqueName: \"kubernetes.io/projected/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-kube-api-access-kdl4f\") pod \"barbican-2843-account-create-update-48mnx\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") " pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.377591 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ln2h\" (UniqueName: \"kubernetes.io/projected/07984ba6-b448-4418-bc8d-e09313294368-kube-api-access-4ln2h\") pod \"barbican-db-create-ljhr7\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") " pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.383922 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.396608 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:10 crc kubenswrapper[4799]: W0127 09:16:10.872157 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07984ba6_b448_4418_bc8d_e09313294368.slice/crio-9d5e1e1777e1f4a16ae9094e589c66c34de48e930e23b472dce3669dbf7f13b7 WatchSource:0}: Error finding container 9d5e1e1777e1f4a16ae9094e589c66c34de48e930e23b472dce3669dbf7f13b7: Status 404 returned error can't find the container with id 9d5e1e1777e1f4a16ae9094e589c66c34de48e930e23b472dce3669dbf7f13b7
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.874833 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ljhr7"]
Jan 27 09:16:10 crc kubenswrapper[4799]: I0127 09:16:10.916369 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2843-account-create-update-48mnx"]
Jan 27 09:16:10 crc kubenswrapper[4799]: W0127 09:16:10.931678 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46006d52_e5ef_4ec0_bf9e_e0c77c0cd441.slice/crio-4261c2aeddc140e47ceb509687e22d0e0fdefea8b1901e42f586ed061a025b0e WatchSource:0}: Error finding container 4261c2aeddc140e47ceb509687e22d0e0fdefea8b1901e42f586ed061a025b0e: Status 404 returned error can't find the container with id 4261c2aeddc140e47ceb509687e22d0e0fdefea8b1901e42f586ed061a025b0e
Jan 27 09:16:11 crc kubenswrapper[4799]: I0127 09:16:11.396140 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ljhr7" event={"ID":"07984ba6-b448-4418-bc8d-e09313294368","Type":"ContainerStarted","Data":"aa2c906c4a284eea692fb23f88b544ea42b085d555455e752957b466572524cb"}
Jan 27 09:16:11 crc kubenswrapper[4799]: I0127 09:16:11.396714 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ljhr7" event={"ID":"07984ba6-b448-4418-bc8d-e09313294368","Type":"ContainerStarted","Data":"9d5e1e1777e1f4a16ae9094e589c66c34de48e930e23b472dce3669dbf7f13b7"}
Jan 27 09:16:11 crc kubenswrapper[4799]: I0127 09:16:11.398125 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2843-account-create-update-48mnx" event={"ID":"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441","Type":"ContainerStarted","Data":"60766fcd0e55a6dedac127c12ca5f0eb8b00492830d64598c0bd57ab460d9d1f"}
Jan 27 09:16:11 crc kubenswrapper[4799]: I0127 09:16:11.398173 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2843-account-create-update-48mnx" event={"ID":"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441","Type":"ContainerStarted","Data":"4261c2aeddc140e47ceb509687e22d0e0fdefea8b1901e42f586ed061a025b0e"}
Jan 27 09:16:11 crc kubenswrapper[4799]: I0127 09:16:11.416746 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-ljhr7" podStartSLOduration=1.416728669 podStartE2EDuration="1.416728669s" podCreationTimestamp="2026-01-27 09:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:11.410651744 +0000 UTC m=+5437.721755809" watchObservedRunningTime="2026-01-27 09:16:11.416728669 +0000 UTC m=+5437.727832734"
Jan 27 09:16:11 crc kubenswrapper[4799]: I0127 09:16:11.427638 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2843-account-create-update-48mnx" podStartSLOduration=1.427616065 podStartE2EDuration="1.427616065s" podCreationTimestamp="2026-01-27 09:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:11.426144385 +0000 UTC m=+5437.737248450" watchObservedRunningTime="2026-01-27 09:16:11.427616065 +0000 UTC m=+5437.738720140"
Jan 27 09:16:12 crc kubenswrapper[4799]: E0127 09:16:12.246349 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46006d52_e5ef_4ec0_bf9e_e0c77c0cd441.slice/crio-conmon-60766fcd0e55a6dedac127c12ca5f0eb8b00492830d64598c0bd57ab460d9d1f.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 09:16:12 crc kubenswrapper[4799]: I0127 09:16:12.410268 4799 generic.go:334] "Generic (PLEG): container finished" podID="46006d52-e5ef-4ec0-bf9e-e0c77c0cd441" containerID="60766fcd0e55a6dedac127c12ca5f0eb8b00492830d64598c0bd57ab460d9d1f" exitCode=0
Jan 27 09:16:12 crc kubenswrapper[4799]: I0127 09:16:12.410346 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2843-account-create-update-48mnx" event={"ID":"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441","Type":"ContainerDied","Data":"60766fcd0e55a6dedac127c12ca5f0eb8b00492830d64598c0bd57ab460d9d1f"}
Jan 27 09:16:12 crc kubenswrapper[4799]: I0127 09:16:12.412754 4799 generic.go:334] "Generic (PLEG): container finished" podID="07984ba6-b448-4418-bc8d-e09313294368" containerID="aa2c906c4a284eea692fb23f88b544ea42b085d555455e752957b466572524cb" exitCode=0
Jan 27 09:16:12 crc kubenswrapper[4799]: I0127 09:16:12.412817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ljhr7" event={"ID":"07984ba6-b448-4418-bc8d-e09313294368","Type":"ContainerDied","Data":"aa2c906c4a284eea692fb23f88b544ea42b085d555455e752957b466572524cb"}
Jan 27 09:16:13 crc kubenswrapper[4799]: I0127 09:16:13.820227 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:13 crc kubenswrapper[4799]: I0127 09:16:13.826474 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.013390 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07984ba6-b448-4418-bc8d-e09313294368-operator-scripts\") pod \"07984ba6-b448-4418-bc8d-e09313294368\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") "
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.013445 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ln2h\" (UniqueName: \"kubernetes.io/projected/07984ba6-b448-4418-bc8d-e09313294368-kube-api-access-4ln2h\") pod \"07984ba6-b448-4418-bc8d-e09313294368\" (UID: \"07984ba6-b448-4418-bc8d-e09313294368\") "
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.013504 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-operator-scripts\") pod \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") "
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.013552 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdl4f\" (UniqueName: \"kubernetes.io/projected/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-kube-api-access-kdl4f\") pod \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\" (UID: \"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441\") "
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.014434 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07984ba6-b448-4418-bc8d-e09313294368-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07984ba6-b448-4418-bc8d-e09313294368" (UID: "07984ba6-b448-4418-bc8d-e09313294368"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.015175 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46006d52-e5ef-4ec0-bf9e-e0c77c0cd441" (UID: "46006d52-e5ef-4ec0-bf9e-e0c77c0cd441"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.026610 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07984ba6-b448-4418-bc8d-e09313294368-kube-api-access-4ln2h" (OuterVolumeSpecName: "kube-api-access-4ln2h") pod "07984ba6-b448-4418-bc8d-e09313294368" (UID: "07984ba6-b448-4418-bc8d-e09313294368"). InnerVolumeSpecName "kube-api-access-4ln2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.026729 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-kube-api-access-kdl4f" (OuterVolumeSpecName: "kube-api-access-kdl4f") pod "46006d52-e5ef-4ec0-bf9e-e0c77c0cd441" (UID: "46006d52-e5ef-4ec0-bf9e-e0c77c0cd441"). InnerVolumeSpecName "kube-api-access-kdl4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.116186 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.116214 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdl4f\" (UniqueName: \"kubernetes.io/projected/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441-kube-api-access-kdl4f\") on node \"crc\" DevicePath \"\""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.116225 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07984ba6-b448-4418-bc8d-e09313294368-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.116235 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ln2h\" (UniqueName: \"kubernetes.io/projected/07984ba6-b448-4418-bc8d-e09313294368-kube-api-access-4ln2h\") on node \"crc\" DevicePath \"\""
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.435634 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ljhr7"
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.436116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ljhr7" event={"ID":"07984ba6-b448-4418-bc8d-e09313294368","Type":"ContainerDied","Data":"9d5e1e1777e1f4a16ae9094e589c66c34de48e930e23b472dce3669dbf7f13b7"}
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.436145 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d5e1e1777e1f4a16ae9094e589c66c34de48e930e23b472dce3669dbf7f13b7"
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.438984 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2843-account-create-update-48mnx" event={"ID":"46006d52-e5ef-4ec0-bf9e-e0c77c0cd441","Type":"ContainerDied","Data":"4261c2aeddc140e47ceb509687e22d0e0fdefea8b1901e42f586ed061a025b0e"}
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.439034 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4261c2aeddc140e47ceb509687e22d0e0fdefea8b1901e42f586ed061a025b0e"
Jan 27 09:16:14 crc kubenswrapper[4799]: I0127 09:16:14.439104 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2843-account-create-update-48mnx"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.318557 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fgxzt"]
Jan 27 09:16:15 crc kubenswrapper[4799]: E0127 09:16:15.319822 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07984ba6-b448-4418-bc8d-e09313294368" containerName="mariadb-database-create"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.319888 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="07984ba6-b448-4418-bc8d-e09313294368" containerName="mariadb-database-create"
Jan 27 09:16:15 crc kubenswrapper[4799]: E0127 09:16:15.319985 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46006d52-e5ef-4ec0-bf9e-e0c77c0cd441" containerName="mariadb-account-create-update"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.320078 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="46006d52-e5ef-4ec0-bf9e-e0c77c0cd441" containerName="mariadb-account-create-update"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.320273 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="07984ba6-b448-4418-bc8d-e09313294368" containerName="mariadb-database-create"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.320356 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="46006d52-e5ef-4ec0-bf9e-e0c77c0cd441" containerName="mariadb-account-create-update"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.320943 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.322642 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q7bhf"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.324972 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.330141 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fgxzt"]
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.435110 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-db-sync-config-data\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.435433 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-combined-ca-bundle\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.435619 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4b6b\" (UniqueName: \"kubernetes.io/projected/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-kube-api-access-k4b6b\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.537440 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-db-sync-config-data\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.537568 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-combined-ca-bundle\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.537675 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4b6b\" (UniqueName: \"kubernetes.io/projected/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-kube-api-access-k4b6b\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.545804 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-combined-ca-bundle\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.546042 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-db-sync-config-data\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.561660 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4b6b\" (UniqueName: \"kubernetes.io/projected/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-kube-api-access-k4b6b\") pod \"barbican-db-sync-fgxzt\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") " pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:15 crc kubenswrapper[4799]: I0127 09:16:15.641392 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:16 crc kubenswrapper[4799]: I0127 09:16:16.108840 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fgxzt"]
Jan 27 09:16:16 crc kubenswrapper[4799]: I0127 09:16:16.464123 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgxzt" event={"ID":"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2","Type":"ContainerStarted","Data":"6677da93dd3166c302eb3c4b5f95f8f8fffff34a9c376da8a6c00206498c184f"}
Jan 27 09:16:16 crc kubenswrapper[4799]: I0127 09:16:16.464192 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgxzt" event={"ID":"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2","Type":"ContainerStarted","Data":"c286ef59d3b369169b03722bc36a203a2484ae36f1d6e7cded75d5bb0025de6d"}
Jan 27 09:16:16 crc kubenswrapper[4799]: I0127 09:16:16.477078 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fgxzt" podStartSLOduration=1.4770557659999999 podStartE2EDuration="1.477055766s" podCreationTimestamp="2026-01-27 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:16.469834208 +0000 UTC m=+5442.780938293" watchObservedRunningTime="2026-01-27 09:16:16.477055766 +0000 UTC m=+5442.788159831"
Jan 27 09:16:17 crc kubenswrapper[4799]: I0127 09:16:17.451328 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"
Jan 27 09:16:17 crc kubenswrapper[4799]: E0127 09:16:17.451940 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:16:18 crc kubenswrapper[4799]: I0127 09:16:18.469997 4799 generic.go:334] "Generic (PLEG): container finished" podID="e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" containerID="6677da93dd3166c302eb3c4b5f95f8f8fffff34a9c376da8a6c00206498c184f" exitCode=0
Jan 27 09:16:18 crc kubenswrapper[4799]: I0127 09:16:18.470035 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgxzt" event={"ID":"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2","Type":"ContainerDied","Data":"6677da93dd3166c302eb3c4b5f95f8f8fffff34a9c376da8a6c00206498c184f"}
Jan 27 09:16:19 crc kubenswrapper[4799]: I0127 09:16:19.760108 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgxzt"
Jan 27 09:16:19 crc kubenswrapper[4799]: I0127 09:16:19.914563 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-db-sync-config-data\") pod \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") "
Jan 27 09:16:19 crc kubenswrapper[4799]: I0127 09:16:19.914655 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4b6b\" (UniqueName: \"kubernetes.io/projected/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-kube-api-access-k4b6b\") pod \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") "
Jan 27 09:16:19 crc kubenswrapper[4799]: I0127 09:16:19.914710 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-combined-ca-bundle\") pod \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\" (UID: \"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2\") "
Jan 27 09:16:19 crc kubenswrapper[4799]: I0127 09:16:19.924444 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" (UID: "e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:16:19 crc kubenswrapper[4799]: I0127 09:16:19.925014 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-kube-api-access-k4b6b" (OuterVolumeSpecName: "kube-api-access-k4b6b") pod "e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" (UID: "e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2").
InnerVolumeSpecName "kube-api-access-k4b6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:19 crc kubenswrapper[4799]: I0127 09:16:19.966049 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" (UID: "e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.016536 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.016576 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4b6b\" (UniqueName: \"kubernetes.io/projected/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-kube-api-access-k4b6b\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.016586 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.487954 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fgxzt" event={"ID":"e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2","Type":"ContainerDied","Data":"c286ef59d3b369169b03722bc36a203a2484ae36f1d6e7cded75d5bb0025de6d"} Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.488000 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c286ef59d3b369169b03722bc36a203a2484ae36f1d6e7cded75d5bb0025de6d" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.488108 4799 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fgxzt" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.719684 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-646689794f-ln658"] Jan 27 09:16:20 crc kubenswrapper[4799]: E0127 09:16:20.720116 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" containerName="barbican-db-sync" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.720141 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" containerName="barbican-db-sync" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.720373 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" containerName="barbican-db-sync" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.721573 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.727917 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.728041 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.728164 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q7bhf" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.736593 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-646689794f-ln658"] Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.748291 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-df9f7bcc4-tnmxn"] Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.753987 4799 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.757637 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.788363 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-df9f7bcc4-tnmxn"] Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.812623 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df979b9b9-jvkng"] Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.814851 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.830195 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-config-data\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.830318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42976d87-a22a-4111-9c1c-35370e961782-logs\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.830342 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-config-data-custom\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " 
pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.830387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82w5f\" (UniqueName: \"kubernetes.io/projected/42976d87-a22a-4111-9c1c-35370e961782-kube-api-access-82w5f\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.830438 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-combined-ca-bundle\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.841889 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df979b9b9-jvkng"] Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932143 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-config-data-custom\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932227 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42976d87-a22a-4111-9c1c-35370e961782-logs\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932278 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-82w5f\" (UniqueName: \"kubernetes.io/projected/42976d87-a22a-4111-9c1c-35370e961782-kube-api-access-82w5f\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932396 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-dns-svc\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932423 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-nb\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932446 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2435d082-5291-4270-b40e-eae6085ee3db-logs\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932480 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z68t\" (UniqueName: \"kubernetes.io/projected/68198885-4c18-4ea5-b5de-24b9f6cda897-kube-api-access-2z68t\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932519 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-config-data\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932553 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-combined-ca-bundle\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932576 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-424vk\" (UniqueName: \"kubernetes.io/projected/2435d082-5291-4270-b40e-eae6085ee3db-kube-api-access-424vk\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932612 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-config-data\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932664 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-config-data-custom\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " 
pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932709 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-sb\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932749 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-combined-ca-bundle\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.932837 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-config\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.933340 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42976d87-a22a-4111-9c1c-35370e961782-logs\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.936726 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-combined-ca-bundle\") pod \"barbican-worker-646689794f-ln658\" (UID: 
\"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.940529 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-config-data\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.942875 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-765c486f4b-k85rt"] Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.943059 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42976d87-a22a-4111-9c1c-35370e961782-config-data-custom\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.944593 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.946776 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.957586 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-765c486f4b-k85rt"] Jan 27 09:16:20 crc kubenswrapper[4799]: I0127 09:16:20.963458 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82w5f\" (UniqueName: \"kubernetes.io/projected/42976d87-a22a-4111-9c1c-35370e961782-kube-api-access-82w5f\") pod \"barbican-worker-646689794f-ln658\" (UID: \"42976d87-a22a-4111-9c1c-35370e961782\") " pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034553 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-dns-svc\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034591 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-nb\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034612 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2435d082-5291-4270-b40e-eae6085ee3db-logs\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc 
kubenswrapper[4799]: I0127 09:16:21.034642 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z68t\" (UniqueName: \"kubernetes.io/projected/68198885-4c18-4ea5-b5de-24b9f6cda897-kube-api-access-2z68t\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034668 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-config-data\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034696 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-config-data-custom\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-424vk\" (UniqueName: \"kubernetes.io/projected/2435d082-5291-4270-b40e-eae6085ee3db-kube-api-access-424vk\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034742 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12991e8b-afab-43fd-8635-c48de903d58a-logs\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " 
pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034766 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-config-data\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034796 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-config-data-custom\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034819 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-sb\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034842 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-combined-ca-bundle\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034860 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-combined-ca-bundle\") pod \"barbican-api-765c486f4b-k85rt\" 
(UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034886 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc792\" (UniqueName: \"kubernetes.io/projected/12991e8b-afab-43fd-8635-c48de903d58a-kube-api-access-wc792\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.034920 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-config\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.035093 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2435d082-5291-4270-b40e-eae6085ee3db-logs\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.035667 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-nb\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.035669 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-dns-svc\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " 
pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.036235 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-config\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.036656 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-sb\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.040009 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-config-data-custom\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.040665 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-config-data\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.041891 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-646689794f-ln658" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.050407 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z68t\" (UniqueName: \"kubernetes.io/projected/68198885-4c18-4ea5-b5de-24b9f6cda897-kube-api-access-2z68t\") pod \"dnsmasq-dns-df979b9b9-jvkng\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.053149 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2435d082-5291-4270-b40e-eae6085ee3db-combined-ca-bundle\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.054757 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-424vk\" (UniqueName: \"kubernetes.io/projected/2435d082-5291-4270-b40e-eae6085ee3db-kube-api-access-424vk\") pod \"barbican-keystone-listener-df9f7bcc4-tnmxn\" (UID: \"2435d082-5291-4270-b40e-eae6085ee3db\") " pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.074900 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.136442 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.136894 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-combined-ca-bundle\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.136937 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc792\" (UniqueName: \"kubernetes.io/projected/12991e8b-afab-43fd-8635-c48de903d58a-kube-api-access-wc792\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.137005 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-config-data-custom\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.137048 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12991e8b-afab-43fd-8635-c48de903d58a-logs\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.137079 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-config-data\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " 
pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.137845 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12991e8b-afab-43fd-8635-c48de903d58a-logs\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.140598 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-combined-ca-bundle\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.140695 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-config-data-custom\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.141925 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12991e8b-afab-43fd-8635-c48de903d58a-config-data\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.159033 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc792\" (UniqueName: \"kubernetes.io/projected/12991e8b-afab-43fd-8635-c48de903d58a-kube-api-access-wc792\") pod \"barbican-api-765c486f4b-k85rt\" (UID: \"12991e8b-afab-43fd-8635-c48de903d58a\") " pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: 
I0127 09:16:21.318164 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.560844 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df979b9b9-jvkng"] Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.635513 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-646689794f-ln658"] Jan 27 09:16:21 crc kubenswrapper[4799]: I0127 09:16:21.701897 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-df9f7bcc4-tnmxn"] Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.033218 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-765c486f4b-k85rt"] Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.537398 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765c486f4b-k85rt" event={"ID":"12991e8b-afab-43fd-8635-c48de903d58a","Type":"ContainerStarted","Data":"d6b9bdd4d31a0d73f90d4eb43c1e2b4ad923c47df9ca19c1a4fd9a6ff1d8232b"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.537717 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765c486f4b-k85rt" event={"ID":"12991e8b-afab-43fd-8635-c48de903d58a","Type":"ContainerStarted","Data":"89785d7b6ebfc2478c6025f2fc546a959d1760c1ba6b88cbd8d95b402809ea41"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.537729 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765c486f4b-k85rt" event={"ID":"12991e8b-afab-43fd-8635-c48de903d58a","Type":"ContainerStarted","Data":"ad045379586acd2fdf88d76e774f1089accc3f8c903cfe7599144060fbdbda39"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.539051 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-646689794f-ln658" 
event={"ID":"42976d87-a22a-4111-9c1c-35370e961782","Type":"ContainerStarted","Data":"f5e0527de3ab37893984a0a6ef25702a8d51b0225aebf3e9d58cc85a253c1ed9"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.539096 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-646689794f-ln658" event={"ID":"42976d87-a22a-4111-9c1c-35370e961782","Type":"ContainerStarted","Data":"12712df42227104eb4696e82cd71d2cd1b2480ac59dbfdcae97369d56b4e7e07"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.539113 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-646689794f-ln658" event={"ID":"42976d87-a22a-4111-9c1c-35370e961782","Type":"ContainerStarted","Data":"fb03d562d0561a8346c235ce65b844e59eba2ea02c6931669b53d4af61a33acb"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.542007 4799 generic.go:334] "Generic (PLEG): container finished" podID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerID="5f008ea9f352cdbb811bc4cc231aac2f0f99a1cc70b8012e7ed1c37426aa95f0" exitCode=0 Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.542673 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" event={"ID":"68198885-4c18-4ea5-b5de-24b9f6cda897","Type":"ContainerDied","Data":"5f008ea9f352cdbb811bc4cc231aac2f0f99a1cc70b8012e7ed1c37426aa95f0"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.542708 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" event={"ID":"68198885-4c18-4ea5-b5de-24b9f6cda897","Type":"ContainerStarted","Data":"3efa4d777072b5fdbf0e91cee61233cda057acc6a22756f7624e89931e8e423d"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.544636 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" event={"ID":"2435d082-5291-4270-b40e-eae6085ee3db","Type":"ContainerStarted","Data":"5ae372a0307429b8d2318d5e5c6ed80ab4aca96e6fbb0c23dfe456a020b9667f"} 
Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.544671 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" event={"ID":"2435d082-5291-4270-b40e-eae6085ee3db","Type":"ContainerStarted","Data":"962e53f0cc338a28fee02f3a20136a21cf1503f128ba647f11adf6d3cc6eff7b"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.544686 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" event={"ID":"2435d082-5291-4270-b40e-eae6085ee3db","Type":"ContainerStarted","Data":"bb667e0b72d218ce9230c28cb50f9ed1566c664f0ff2ae8b67bef5d24d4b9d66"} Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.560590 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-646689794f-ln658" podStartSLOduration=2.56057006 podStartE2EDuration="2.56057006s" podCreationTimestamp="2026-01-27 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:22.556873249 +0000 UTC m=+5448.867977304" watchObservedRunningTime="2026-01-27 09:16:22.56057006 +0000 UTC m=+5448.871674125" Jan 27 09:16:22 crc kubenswrapper[4799]: I0127 09:16:22.587088 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-df9f7bcc4-tnmxn" podStartSLOduration=2.58706923 podStartE2EDuration="2.58706923s" podCreationTimestamp="2026-01-27 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:22.581601212 +0000 UTC m=+5448.892705287" watchObservedRunningTime="2026-01-27 09:16:22.58706923 +0000 UTC m=+5448.898173295" Jan 27 09:16:23 crc kubenswrapper[4799]: I0127 09:16:23.554935 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" 
event={"ID":"68198885-4c18-4ea5-b5de-24b9f6cda897","Type":"ContainerStarted","Data":"8ff69720f5dd7dd6ebadbc7e49ac52bf7a3a1d9b5363844091e1fe75049fcd78"} Jan 27 09:16:23 crc kubenswrapper[4799]: I0127 09:16:23.573109 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" podStartSLOduration=3.573089629 podStartE2EDuration="3.573089629s" podCreationTimestamp="2026-01-27 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:23.57092647 +0000 UTC m=+5449.882030535" watchObservedRunningTime="2026-01-27 09:16:23.573089629 +0000 UTC m=+5449.884193694" Jan 27 09:16:23 crc kubenswrapper[4799]: I0127 09:16:23.594964 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-765c486f4b-k85rt" podStartSLOduration=3.594946143 podStartE2EDuration="3.594946143s" podCreationTimestamp="2026-01-27 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:23.591786508 +0000 UTC m=+5449.902890593" watchObservedRunningTime="2026-01-27 09:16:23.594946143 +0000 UTC m=+5449.906050208" Jan 27 09:16:24 crc kubenswrapper[4799]: I0127 09:16:24.562713 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:24 crc kubenswrapper[4799]: I0127 09:16:24.563501 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:24 crc kubenswrapper[4799]: I0127 09:16:24.563537 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:28 crc kubenswrapper[4799]: I0127 09:16:28.451781 4799 scope.go:117] "RemoveContainer" 
containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:16:28 crc kubenswrapper[4799]: E0127 09:16:28.452657 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.139248 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.208790 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6859fc6d9c-qbc77"] Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.209025 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" podUID="796fb974-de10-4836-9541-96983ed8f913" containerName="dnsmasq-dns" containerID="cri-o://cb5d2a8f4c87046dd3984e8467f7abaaed3e5ca6c98d4579d5b8df77b584a27f" gracePeriod=10 Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.622539 4799 generic.go:334] "Generic (PLEG): container finished" podID="796fb974-de10-4836-9541-96983ed8f913" containerID="cb5d2a8f4c87046dd3984e8467f7abaaed3e5ca6c98d4579d5b8df77b584a27f" exitCode=0 Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.622619 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" event={"ID":"796fb974-de10-4836-9541-96983ed8f913","Type":"ContainerDied","Data":"cb5d2a8f4c87046dd3984e8467f7abaaed3e5ca6c98d4579d5b8df77b584a27f"} Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.701681 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.787862 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-sb\") pod \"796fb974-de10-4836-9541-96983ed8f913\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.787946 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-nb\") pod \"796fb974-de10-4836-9541-96983ed8f913\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.787972 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-dns-svc\") pod \"796fb974-de10-4836-9541-96983ed8f913\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.788042 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-config\") pod \"796fb974-de10-4836-9541-96983ed8f913\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.788063 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flrwg\" (UniqueName: \"kubernetes.io/projected/796fb974-de10-4836-9541-96983ed8f913-kube-api-access-flrwg\") pod \"796fb974-de10-4836-9541-96983ed8f913\" (UID: \"796fb974-de10-4836-9541-96983ed8f913\") " Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.798970 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/796fb974-de10-4836-9541-96983ed8f913-kube-api-access-flrwg" (OuterVolumeSpecName: "kube-api-access-flrwg") pod "796fb974-de10-4836-9541-96983ed8f913" (UID: "796fb974-de10-4836-9541-96983ed8f913"). InnerVolumeSpecName "kube-api-access-flrwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.847093 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "796fb974-de10-4836-9541-96983ed8f913" (UID: "796fb974-de10-4836-9541-96983ed8f913"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.851936 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "796fb974-de10-4836-9541-96983ed8f913" (UID: "796fb974-de10-4836-9541-96983ed8f913"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.861753 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-config" (OuterVolumeSpecName: "config") pod "796fb974-de10-4836-9541-96983ed8f913" (UID: "796fb974-de10-4836-9541-96983ed8f913"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.867282 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "796fb974-de10-4836-9541-96983ed8f913" (UID: "796fb974-de10-4836-9541-96983ed8f913"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.890014 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.890047 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.890059 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.890067 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796fb974-de10-4836-9541-96983ed8f913-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:31 crc kubenswrapper[4799]: I0127 09:16:31.890077 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flrwg\" (UniqueName: \"kubernetes.io/projected/796fb974-de10-4836-9541-96983ed8f913-kube-api-access-flrwg\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:32 crc kubenswrapper[4799]: I0127 09:16:32.636169 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" event={"ID":"796fb974-de10-4836-9541-96983ed8f913","Type":"ContainerDied","Data":"547543eac5e6c0fba7f1bee3b45eaf24c9411a6a58ca9c0bd21ddbe7bcd451dc"} Jan 27 09:16:32 crc kubenswrapper[4799]: I0127 09:16:32.636515 4799 scope.go:117] "RemoveContainer" containerID="cb5d2a8f4c87046dd3984e8467f7abaaed3e5ca6c98d4579d5b8df77b584a27f" Jan 27 09:16:32 crc kubenswrapper[4799]: I0127 09:16:32.636616 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6859fc6d9c-qbc77" Jan 27 09:16:32 crc kubenswrapper[4799]: I0127 09:16:32.662889 4799 scope.go:117] "RemoveContainer" containerID="0a659ba51a6e477ebf497cb9942800a4c251471590cf6b6a6968381a4ef50c7a" Jan 27 09:16:32 crc kubenswrapper[4799]: I0127 09:16:32.663312 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6859fc6d9c-qbc77"] Jan 27 09:16:32 crc kubenswrapper[4799]: I0127 09:16:32.674759 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6859fc6d9c-qbc77"] Jan 27 09:16:32 crc kubenswrapper[4799]: I0127 09:16:32.851481 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:33 crc kubenswrapper[4799]: I0127 09:16:33.029273 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-765c486f4b-k85rt" Jan 27 09:16:34 crc kubenswrapper[4799]: I0127 09:16:34.466084 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796fb974-de10-4836-9541-96983ed8f913" path="/var/lib/kubelet/pods/796fb974-de10-4836-9541-96983ed8f913/volumes" Jan 27 09:16:40 crc kubenswrapper[4799]: I0127 09:16:40.213180 4799 scope.go:117] "RemoveContainer" containerID="7b2a5cc6166e1513bb4fe1521862a85f0225c13454f06f867841f72953d7fb3f" Jan 27 09:16:40 crc kubenswrapper[4799]: I0127 09:16:40.451674 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:16:40 crc kubenswrapper[4799]: E0127 09:16:40.452238 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.053645 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-86jnb"] Jan 27 09:16:44 crc kubenswrapper[4799]: E0127 09:16:44.054729 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796fb974-de10-4836-9541-96983ed8f913" containerName="init" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.054745 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="796fb974-de10-4836-9541-96983ed8f913" containerName="init" Jan 27 09:16:44 crc kubenswrapper[4799]: E0127 09:16:44.054769 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796fb974-de10-4836-9541-96983ed8f913" containerName="dnsmasq-dns" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.054777 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="796fb974-de10-4836-9541-96983ed8f913" containerName="dnsmasq-dns" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.054979 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="796fb974-de10-4836-9541-96983ed8f913" containerName="dnsmasq-dns" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.055711 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.061079 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-86jnb"] Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.155630 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d0e9-account-create-update-6v2sg"] Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.156605 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.158501 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.166208 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d0e9-account-create-update-6v2sg"] Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.221672 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa25c089-ffcf-47cb-a852-f00efd999834-operator-scripts\") pod \"neutron-d0e9-account-create-update-6v2sg\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.221725 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7mc\" (UniqueName: \"kubernetes.io/projected/fa25c089-ffcf-47cb-a852-f00efd999834-kube-api-access-mp7mc\") pod \"neutron-d0e9-account-create-update-6v2sg\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.221921 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b826cd-24f3-452a-a058-aa6dd0414e73-operator-scripts\") pod \"neutron-db-create-86jnb\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.221982 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r54t4\" (UniqueName: \"kubernetes.io/projected/c0b826cd-24f3-452a-a058-aa6dd0414e73-kube-api-access-r54t4\") pod 
\"neutron-db-create-86jnb\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.324345 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa25c089-ffcf-47cb-a852-f00efd999834-operator-scripts\") pod \"neutron-d0e9-account-create-update-6v2sg\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.324465 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7mc\" (UniqueName: \"kubernetes.io/projected/fa25c089-ffcf-47cb-a852-f00efd999834-kube-api-access-mp7mc\") pod \"neutron-d0e9-account-create-update-6v2sg\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.324552 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b826cd-24f3-452a-a058-aa6dd0414e73-operator-scripts\") pod \"neutron-db-create-86jnb\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.324598 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r54t4\" (UniqueName: \"kubernetes.io/projected/c0b826cd-24f3-452a-a058-aa6dd0414e73-kube-api-access-r54t4\") pod \"neutron-db-create-86jnb\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.325374 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b826cd-24f3-452a-a058-aa6dd0414e73-operator-scripts\") pod 
\"neutron-db-create-86jnb\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.325374 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa25c089-ffcf-47cb-a852-f00efd999834-operator-scripts\") pod \"neutron-d0e9-account-create-update-6v2sg\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.346984 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7mc\" (UniqueName: \"kubernetes.io/projected/fa25c089-ffcf-47cb-a852-f00efd999834-kube-api-access-mp7mc\") pod \"neutron-d0e9-account-create-update-6v2sg\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.347439 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r54t4\" (UniqueName: \"kubernetes.io/projected/c0b826cd-24f3-452a-a058-aa6dd0414e73-kube-api-access-r54t4\") pod \"neutron-db-create-86jnb\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.374413 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.473574 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:44 crc kubenswrapper[4799]: I0127 09:16:44.859567 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-86jnb"] Jan 27 09:16:44 crc kubenswrapper[4799]: W0127 09:16:44.865187 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0b826cd_24f3_452a_a058_aa6dd0414e73.slice/crio-7d494fa8d0de698f1285f62f814ae431de2757ffbdae156c013acbb6802bb440 WatchSource:0}: Error finding container 7d494fa8d0de698f1285f62f814ae431de2757ffbdae156c013acbb6802bb440: Status 404 returned error can't find the container with id 7d494fa8d0de698f1285f62f814ae431de2757ffbdae156c013acbb6802bb440 Jan 27 09:16:45 crc kubenswrapper[4799]: I0127 09:16:45.006404 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d0e9-account-create-update-6v2sg"] Jan 27 09:16:45 crc kubenswrapper[4799]: W0127 09:16:45.008919 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa25c089_ffcf_47cb_a852_f00efd999834.slice/crio-1d3cc0b680cdd65613d0b2ddcedeb29b4d98ac483faea11ca3488c49dced5096 WatchSource:0}: Error finding container 1d3cc0b680cdd65613d0b2ddcedeb29b4d98ac483faea11ca3488c49dced5096: Status 404 returned error can't find the container with id 1d3cc0b680cdd65613d0b2ddcedeb29b4d98ac483faea11ca3488c49dced5096 Jan 27 09:16:45 crc kubenswrapper[4799]: I0127 09:16:45.783196 4799 generic.go:334] "Generic (PLEG): container finished" podID="fa25c089-ffcf-47cb-a852-f00efd999834" containerID="bd9b8a0eb88d29db66f609f0553adfb8c627233e4943e79ced1a508def24aaf3" exitCode=0 Jan 27 09:16:45 crc kubenswrapper[4799]: I0127 09:16:45.783458 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d0e9-account-create-update-6v2sg" 
event={"ID":"fa25c089-ffcf-47cb-a852-f00efd999834","Type":"ContainerDied","Data":"bd9b8a0eb88d29db66f609f0553adfb8c627233e4943e79ced1a508def24aaf3"} Jan 27 09:16:45 crc kubenswrapper[4799]: I0127 09:16:45.784508 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d0e9-account-create-update-6v2sg" event={"ID":"fa25c089-ffcf-47cb-a852-f00efd999834","Type":"ContainerStarted","Data":"1d3cc0b680cdd65613d0b2ddcedeb29b4d98ac483faea11ca3488c49dced5096"} Jan 27 09:16:45 crc kubenswrapper[4799]: I0127 09:16:45.785801 4799 generic.go:334] "Generic (PLEG): container finished" podID="c0b826cd-24f3-452a-a058-aa6dd0414e73" containerID="5b455dd7fe8eb8e19cf1247368e12a04f5ba3260c71f621d78bd7fd6edeb7f45" exitCode=0 Jan 27 09:16:45 crc kubenswrapper[4799]: I0127 09:16:45.785851 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-86jnb" event={"ID":"c0b826cd-24f3-452a-a058-aa6dd0414e73","Type":"ContainerDied","Data":"5b455dd7fe8eb8e19cf1247368e12a04f5ba3260c71f621d78bd7fd6edeb7f45"} Jan 27 09:16:45 crc kubenswrapper[4799]: I0127 09:16:45.785880 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-86jnb" event={"ID":"c0b826cd-24f3-452a-a058-aa6dd0414e73","Type":"ContainerStarted","Data":"7d494fa8d0de698f1285f62f814ae431de2757ffbdae156c013acbb6802bb440"} Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.221971 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.229082 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.377390 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp7mc\" (UniqueName: \"kubernetes.io/projected/fa25c089-ffcf-47cb-a852-f00efd999834-kube-api-access-mp7mc\") pod \"fa25c089-ffcf-47cb-a852-f00efd999834\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.377459 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r54t4\" (UniqueName: \"kubernetes.io/projected/c0b826cd-24f3-452a-a058-aa6dd0414e73-kube-api-access-r54t4\") pod \"c0b826cd-24f3-452a-a058-aa6dd0414e73\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.377616 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa25c089-ffcf-47cb-a852-f00efd999834-operator-scripts\") pod \"fa25c089-ffcf-47cb-a852-f00efd999834\" (UID: \"fa25c089-ffcf-47cb-a852-f00efd999834\") " Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.377657 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b826cd-24f3-452a-a058-aa6dd0414e73-operator-scripts\") pod \"c0b826cd-24f3-452a-a058-aa6dd0414e73\" (UID: \"c0b826cd-24f3-452a-a058-aa6dd0414e73\") " Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.378381 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa25c089-ffcf-47cb-a852-f00efd999834-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa25c089-ffcf-47cb-a852-f00efd999834" (UID: "fa25c089-ffcf-47cb-a852-f00efd999834"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.378399 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b826cd-24f3-452a-a058-aa6dd0414e73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0b826cd-24f3-452a-a058-aa6dd0414e73" (UID: "c0b826cd-24f3-452a-a058-aa6dd0414e73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.382856 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa25c089-ffcf-47cb-a852-f00efd999834-kube-api-access-mp7mc" (OuterVolumeSpecName: "kube-api-access-mp7mc") pod "fa25c089-ffcf-47cb-a852-f00efd999834" (UID: "fa25c089-ffcf-47cb-a852-f00efd999834"). InnerVolumeSpecName "kube-api-access-mp7mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.387182 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b826cd-24f3-452a-a058-aa6dd0414e73-kube-api-access-r54t4" (OuterVolumeSpecName: "kube-api-access-r54t4") pod "c0b826cd-24f3-452a-a058-aa6dd0414e73" (UID: "c0b826cd-24f3-452a-a058-aa6dd0414e73"). InnerVolumeSpecName "kube-api-access-r54t4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.479877 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r54t4\" (UniqueName: \"kubernetes.io/projected/c0b826cd-24f3-452a-a058-aa6dd0414e73-kube-api-access-r54t4\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.479953 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa25c089-ffcf-47cb-a852-f00efd999834-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.480047 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b826cd-24f3-452a-a058-aa6dd0414e73-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.480087 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp7mc\" (UniqueName: \"kubernetes.io/projected/fa25c089-ffcf-47cb-a852-f00efd999834-kube-api-access-mp7mc\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.806331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d0e9-account-create-update-6v2sg" event={"ID":"fa25c089-ffcf-47cb-a852-f00efd999834","Type":"ContainerDied","Data":"1d3cc0b680cdd65613d0b2ddcedeb29b4d98ac483faea11ca3488c49dced5096"} Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.806654 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3cc0b680cdd65613d0b2ddcedeb29b4d98ac483faea11ca3488c49dced5096" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.806713 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d0e9-account-create-update-6v2sg" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.814088 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-86jnb" event={"ID":"c0b826cd-24f3-452a-a058-aa6dd0414e73","Type":"ContainerDied","Data":"7d494fa8d0de698f1285f62f814ae431de2757ffbdae156c013acbb6802bb440"} Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.814126 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d494fa8d0de698f1285f62f814ae431de2757ffbdae156c013acbb6802bb440" Jan 27 09:16:47 crc kubenswrapper[4799]: I0127 09:16:47.814179 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-86jnb" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.367838 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-xqszt"] Jan 27 09:16:49 crc kubenswrapper[4799]: E0127 09:16:49.368337 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b826cd-24f3-452a-a058-aa6dd0414e73" containerName="mariadb-database-create" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.368354 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b826cd-24f3-452a-a058-aa6dd0414e73" containerName="mariadb-database-create" Jan 27 09:16:49 crc kubenswrapper[4799]: E0127 09:16:49.368370 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa25c089-ffcf-47cb-a852-f00efd999834" containerName="mariadb-account-create-update" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.368376 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa25c089-ffcf-47cb-a852-f00efd999834" containerName="mariadb-account-create-update" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.368567 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b826cd-24f3-452a-a058-aa6dd0414e73" containerName="mariadb-database-create" Jan 
27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.368579 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa25c089-ffcf-47cb-a852-f00efd999834" containerName="mariadb-account-create-update" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.369157 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.371146 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.371473 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8ntc6" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.371586 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.381751 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xqszt"] Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.413492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-combined-ca-bundle\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.413561 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvh7h\" (UniqueName: \"kubernetes.io/projected/ff35b6f5-6086-4d84-be04-e985293fdd87-kube-api-access-pvh7h\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.413613 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-config\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.514935 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-config\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.515171 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-combined-ca-bundle\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.515277 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvh7h\" (UniqueName: \"kubernetes.io/projected/ff35b6f5-6086-4d84-be04-e985293fdd87-kube-api-access-pvh7h\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.527459 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-combined-ca-bundle\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.529559 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-config\") pod \"neutron-db-sync-xqszt\" (UID: 
\"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.531735 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvh7h\" (UniqueName: \"kubernetes.io/projected/ff35b6f5-6086-4d84-be04-e985293fdd87-kube-api-access-pvh7h\") pod \"neutron-db-sync-xqszt\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:49 crc kubenswrapper[4799]: I0127 09:16:49.692573 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:50 crc kubenswrapper[4799]: I0127 09:16:50.140832 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xqszt"] Jan 27 09:16:50 crc kubenswrapper[4799]: I0127 09:16:50.842573 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqszt" event={"ID":"ff35b6f5-6086-4d84-be04-e985293fdd87","Type":"ContainerStarted","Data":"561118bd1cfc368adec06eeec6619da8bbf3a3c206a3a17d59935e4034f4949b"} Jan 27 09:16:50 crc kubenswrapper[4799]: I0127 09:16:50.842962 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqszt" event={"ID":"ff35b6f5-6086-4d84-be04-e985293fdd87","Type":"ContainerStarted","Data":"1e9aabb1502455ad5f97efa54976751c4a5be96d260dee325bff5234f4de9a8a"} Jan 27 09:16:50 crc kubenswrapper[4799]: I0127 09:16:50.872337 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-xqszt" podStartSLOduration=1.872317723 podStartE2EDuration="1.872317723s" podCreationTimestamp="2026-01-27 09:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:16:50.863512914 +0000 UTC m=+5477.174617009" watchObservedRunningTime="2026-01-27 09:16:50.872317723 +0000 UTC m=+5477.183421798" Jan 27 09:16:51 crc 
kubenswrapper[4799]: I0127 09:16:51.451377 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:16:51 crc kubenswrapper[4799]: E0127 09:16:51.452133 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:16:57 crc kubenswrapper[4799]: I0127 09:16:57.901366 4799 generic.go:334] "Generic (PLEG): container finished" podID="ff35b6f5-6086-4d84-be04-e985293fdd87" containerID="561118bd1cfc368adec06eeec6619da8bbf3a3c206a3a17d59935e4034f4949b" exitCode=0 Jan 27 09:16:57 crc kubenswrapper[4799]: I0127 09:16:57.901502 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqszt" event={"ID":"ff35b6f5-6086-4d84-be04-e985293fdd87","Type":"ContainerDied","Data":"561118bd1cfc368adec06eeec6619da8bbf3a3c206a3a17d59935e4034f4949b"} Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.276935 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-xqszt" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.383957 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvh7h\" (UniqueName: \"kubernetes.io/projected/ff35b6f5-6086-4d84-be04-e985293fdd87-kube-api-access-pvh7h\") pod \"ff35b6f5-6086-4d84-be04-e985293fdd87\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.384013 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-config\") pod \"ff35b6f5-6086-4d84-be04-e985293fdd87\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.384123 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-combined-ca-bundle\") pod \"ff35b6f5-6086-4d84-be04-e985293fdd87\" (UID: \"ff35b6f5-6086-4d84-be04-e985293fdd87\") " Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.390352 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff35b6f5-6086-4d84-be04-e985293fdd87-kube-api-access-pvh7h" (OuterVolumeSpecName: "kube-api-access-pvh7h") pod "ff35b6f5-6086-4d84-be04-e985293fdd87" (UID: "ff35b6f5-6086-4d84-be04-e985293fdd87"). InnerVolumeSpecName "kube-api-access-pvh7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.408822 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-config" (OuterVolumeSpecName: "config") pod "ff35b6f5-6086-4d84-be04-e985293fdd87" (UID: "ff35b6f5-6086-4d84-be04-e985293fdd87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.411437 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff35b6f5-6086-4d84-be04-e985293fdd87" (UID: "ff35b6f5-6086-4d84-be04-e985293fdd87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.486479 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvh7h\" (UniqueName: \"kubernetes.io/projected/ff35b6f5-6086-4d84-be04-e985293fdd87-kube-api-access-pvh7h\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.486519 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.486532 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff35b6f5-6086-4d84-be04-e985293fdd87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.935468 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xqszt" event={"ID":"ff35b6f5-6086-4d84-be04-e985293fdd87","Type":"ContainerDied","Data":"1e9aabb1502455ad5f97efa54976751c4a5be96d260dee325bff5234f4de9a8a"} Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.935542 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9aabb1502455ad5f97efa54976751c4a5be96d260dee325bff5234f4de9a8a" Jan 27 09:16:59 crc kubenswrapper[4799]: I0127 09:16:59.935546 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-xqszt" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.091676 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dddc8d79-4xd27"] Jan 27 09:17:00 crc kubenswrapper[4799]: E0127 09:17:00.092024 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff35b6f5-6086-4d84-be04-e985293fdd87" containerName="neutron-db-sync" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.092041 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff35b6f5-6086-4d84-be04-e985293fdd87" containerName="neutron-db-sync" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.092206 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff35b6f5-6086-4d84-be04-e985293fdd87" containerName="neutron-db-sync" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.097081 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.105212 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dddc8d79-4xd27"] Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.161040 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59d99bc4df-65fzn"] Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.163022 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.169847 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.170087 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8ntc6" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.170231 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.182041 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d99bc4df-65fzn"] Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.210542 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w75sf\" (UniqueName: \"kubernetes.io/projected/cfd7952b-2e54-458c-9b3b-770466bcc0e7-kube-api-access-w75sf\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.210624 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-sb\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.210784 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-config\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.210990 
4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-nb\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.211107 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-dns-svc\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313118 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w75sf\" (UniqueName: \"kubernetes.io/projected/cfd7952b-2e54-458c-9b3b-770466bcc0e7-kube-api-access-w75sf\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313593 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-sb\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313636 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-config\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313680 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-config\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313733 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-combined-ca-bundle\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313785 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlj25\" (UniqueName: \"kubernetes.io/projected/88499b25-ea05-4dad-b96a-9ff1244b25e1-kube-api-access-xlj25\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313809 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-nb\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313865 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-httpd-config\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.313900 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-dns-svc\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.314909 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-dns-svc\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.315851 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-config\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.315960 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-nb\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.317123 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-sb\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.337875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w75sf\" (UniqueName: 
\"kubernetes.io/projected/cfd7952b-2e54-458c-9b3b-770466bcc0e7-kube-api-access-w75sf\") pod \"dnsmasq-dns-dddc8d79-4xd27\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.415518 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-config\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.415608 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-combined-ca-bundle\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.415652 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlj25\" (UniqueName: \"kubernetes.io/projected/88499b25-ea05-4dad-b96a-9ff1244b25e1-kube-api-access-xlj25\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.415703 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-httpd-config\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.419045 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-httpd-config\") pod 
\"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.422054 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-config\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.432725 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlj25\" (UniqueName: \"kubernetes.io/projected/88499b25-ea05-4dad-b96a-9ff1244b25e1-kube-api-access-xlj25\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.433096 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88499b25-ea05-4dad-b96a-9ff1244b25e1-combined-ca-bundle\") pod \"neutron-59d99bc4df-65fzn\" (UID: \"88499b25-ea05-4dad-b96a-9ff1244b25e1\") " pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.440568 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.481063 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:00 crc kubenswrapper[4799]: I0127 09:17:00.948983 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dddc8d79-4xd27"] Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.113656 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d99bc4df-65fzn"] Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.952025 4799 generic.go:334] "Generic (PLEG): container finished" podID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerID="afa225297c65da3940e9983d850b361ea1096dc7222bd568447140a09de4aa16" exitCode=0 Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.952097 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" event={"ID":"cfd7952b-2e54-458c-9b3b-770466bcc0e7","Type":"ContainerDied","Data":"afa225297c65da3940e9983d850b361ea1096dc7222bd568447140a09de4aa16"} Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.952416 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" event={"ID":"cfd7952b-2e54-458c-9b3b-770466bcc0e7","Type":"ContainerStarted","Data":"58f7c93d627b46aeb14903b6dfdbc30c40daa8af8bff3b7a093b65fa0d75fcf6"} Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.955190 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d99bc4df-65fzn" event={"ID":"88499b25-ea05-4dad-b96a-9ff1244b25e1","Type":"ContainerStarted","Data":"e88a7b78d53d61551ddc69274a7dc44de2d3517f8746316d1692cb95926e6828"} Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.955219 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d99bc4df-65fzn" event={"ID":"88499b25-ea05-4dad-b96a-9ff1244b25e1","Type":"ContainerStarted","Data":"16586b5c5ab3f9795624883a5afd728b27b10684e766f605c3c431d761adab16"} Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.955229 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-59d99bc4df-65fzn" event={"ID":"88499b25-ea05-4dad-b96a-9ff1244b25e1","Type":"ContainerStarted","Data":"04a345418abd6b96b67897660713f740791f25c64abb4aea39f2fcb717371cb1"} Jan 27 09:17:01 crc kubenswrapper[4799]: I0127 09:17:01.955605 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:02 crc kubenswrapper[4799]: I0127 09:17:02.007736 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59d99bc4df-65fzn" podStartSLOduration=2.007712411 podStartE2EDuration="2.007712411s" podCreationTimestamp="2026-01-27 09:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:02.004250857 +0000 UTC m=+5488.315354922" watchObservedRunningTime="2026-01-27 09:17:02.007712411 +0000 UTC m=+5488.318816466" Jan 27 09:17:02 crc kubenswrapper[4799]: I0127 09:17:02.966248 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" event={"ID":"cfd7952b-2e54-458c-9b3b-770466bcc0e7","Type":"ContainerStarted","Data":"e337ca39ecfd1ea51aac6e0c6349469b8a1dde1dbd6cf6c224a63e7de679f34f"} Jan 27 09:17:02 crc kubenswrapper[4799]: I0127 09:17:02.993322 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" podStartSLOduration=2.9932759669999998 podStartE2EDuration="2.993275967s" podCreationTimestamp="2026-01-27 09:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:02.987998173 +0000 UTC m=+5489.299102238" watchObservedRunningTime="2026-01-27 09:17:02.993275967 +0000 UTC m=+5489.304380062" Jan 27 09:17:03 crc kubenswrapper[4799]: I0127 09:17:03.974443 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:05 crc kubenswrapper[4799]: I0127 09:17:05.452420 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:17:05 crc kubenswrapper[4799]: E0127 09:17:05.452818 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:17:10 crc kubenswrapper[4799]: I0127 09:17:10.442537 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:17:10 crc kubenswrapper[4799]: I0127 09:17:10.503370 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df979b9b9-jvkng"] Jan 27 09:17:10 crc kubenswrapper[4799]: I0127 09:17:10.503807 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" podUID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerName="dnsmasq-dns" containerID="cri-o://8ff69720f5dd7dd6ebadbc7e49ac52bf7a3a1d9b5363844091e1fe75049fcd78" gracePeriod=10 Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.036194 4799 generic.go:334] "Generic (PLEG): container finished" podID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerID="8ff69720f5dd7dd6ebadbc7e49ac52bf7a3a1d9b5363844091e1fe75049fcd78" exitCode=0 Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.036701 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" event={"ID":"68198885-4c18-4ea5-b5de-24b9f6cda897","Type":"ContainerDied","Data":"8ff69720f5dd7dd6ebadbc7e49ac52bf7a3a1d9b5363844091e1fe75049fcd78"} Jan 27 09:17:11 
crc kubenswrapper[4799]: I0127 09:17:11.036773 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" event={"ID":"68198885-4c18-4ea5-b5de-24b9f6cda897","Type":"ContainerDied","Data":"3efa4d777072b5fdbf0e91cee61233cda057acc6a22756f7624e89931e8e423d"} Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.036795 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3efa4d777072b5fdbf0e91cee61233cda057acc6a22756f7624e89931e8e423d" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.069961 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.157487 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-config\") pod \"68198885-4c18-4ea5-b5de-24b9f6cda897\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.157585 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-dns-svc\") pod \"68198885-4c18-4ea5-b5de-24b9f6cda897\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.157623 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-sb\") pod \"68198885-4c18-4ea5-b5de-24b9f6cda897\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.157679 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z68t\" (UniqueName: 
\"kubernetes.io/projected/68198885-4c18-4ea5-b5de-24b9f6cda897-kube-api-access-2z68t\") pod \"68198885-4c18-4ea5-b5de-24b9f6cda897\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.157743 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-nb\") pod \"68198885-4c18-4ea5-b5de-24b9f6cda897\" (UID: \"68198885-4c18-4ea5-b5de-24b9f6cda897\") " Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.190600 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68198885-4c18-4ea5-b5de-24b9f6cda897-kube-api-access-2z68t" (OuterVolumeSpecName: "kube-api-access-2z68t") pod "68198885-4c18-4ea5-b5de-24b9f6cda897" (UID: "68198885-4c18-4ea5-b5de-24b9f6cda897"). InnerVolumeSpecName "kube-api-access-2z68t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.203929 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68198885-4c18-4ea5-b5de-24b9f6cda897" (UID: "68198885-4c18-4ea5-b5de-24b9f6cda897"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.207842 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68198885-4c18-4ea5-b5de-24b9f6cda897" (UID: "68198885-4c18-4ea5-b5de-24b9f6cda897"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.213067 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-config" (OuterVolumeSpecName: "config") pod "68198885-4c18-4ea5-b5de-24b9f6cda897" (UID: "68198885-4c18-4ea5-b5de-24b9f6cda897"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.219833 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68198885-4c18-4ea5-b5de-24b9f6cda897" (UID: "68198885-4c18-4ea5-b5de-24b9f6cda897"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.259135 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.259201 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.259224 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z68t\" (UniqueName: \"kubernetes.io/projected/68198885-4c18-4ea5-b5de-24b9f6cda897-kube-api-access-2z68t\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.259238 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 
09:17:11 crc kubenswrapper[4799]: I0127 09:17:11.259255 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68198885-4c18-4ea5-b5de-24b9f6cda897-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.038200 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q4xwm"] Jan 27 09:17:12 crc kubenswrapper[4799]: E0127 09:17:12.038621 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerName="init" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.038640 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerName="init" Jan 27 09:17:12 crc kubenswrapper[4799]: E0127 09:17:12.038686 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerName="dnsmasq-dns" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.038695 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerName="dnsmasq-dns" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.038943 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="68198885-4c18-4ea5-b5de-24b9f6cda897" containerName="dnsmasq-dns" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.043862 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df979b9b9-jvkng" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.045495 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.051176 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4xwm"] Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.071805 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whlc4\" (UniqueName: \"kubernetes.io/projected/65d33f0a-43f7-41ec-878e-b7c01550bfa1-kube-api-access-whlc4\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.075120 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-catalog-content\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.075297 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-utilities\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.096746 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df979b9b9-jvkng"] Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.110077 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-df979b9b9-jvkng"] Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.177402 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whlc4\" (UniqueName: 
\"kubernetes.io/projected/65d33f0a-43f7-41ec-878e-b7c01550bfa1-kube-api-access-whlc4\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.177572 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-catalog-content\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.177619 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-utilities\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.178245 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-catalog-content\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.178623 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-utilities\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.203988 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whlc4\" (UniqueName: 
\"kubernetes.io/projected/65d33f0a-43f7-41ec-878e-b7c01550bfa1-kube-api-access-whlc4\") pod \"redhat-operators-q4xwm\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.369509 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.464007 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68198885-4c18-4ea5-b5de-24b9f6cda897" path="/var/lib/kubelet/pods/68198885-4c18-4ea5-b5de-24b9f6cda897/volumes" Jan 27 09:17:12 crc kubenswrapper[4799]: W0127 09:17:12.847162 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65d33f0a_43f7_41ec_878e_b7c01550bfa1.slice/crio-bb374359c3ac144fe45c6ee251efdc1708e872c3702cb97c0b9a98673a209162 WatchSource:0}: Error finding container bb374359c3ac144fe45c6ee251efdc1708e872c3702cb97c0b9a98673a209162: Status 404 returned error can't find the container with id bb374359c3ac144fe45c6ee251efdc1708e872c3702cb97c0b9a98673a209162 Jan 27 09:17:12 crc kubenswrapper[4799]: I0127 09:17:12.851170 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4xwm"] Jan 27 09:17:13 crc kubenswrapper[4799]: I0127 09:17:13.053438 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerStarted","Data":"f28998e9be402adee3410ed0755df0e0d9c5c3c25521af52b90474211fc191ee"} Jan 27 09:17:13 crc kubenswrapper[4799]: I0127 09:17:13.053527 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" 
event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerStarted","Data":"bb374359c3ac144fe45c6ee251efdc1708e872c3702cb97c0b9a98673a209162"} Jan 27 09:17:14 crc kubenswrapper[4799]: I0127 09:17:14.064334 4799 generic.go:334] "Generic (PLEG): container finished" podID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerID="f28998e9be402adee3410ed0755df0e0d9c5c3c25521af52b90474211fc191ee" exitCode=0 Jan 27 09:17:14 crc kubenswrapper[4799]: I0127 09:17:14.064401 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerDied","Data":"f28998e9be402adee3410ed0755df0e0d9c5c3c25521af52b90474211fc191ee"} Jan 27 09:17:14 crc kubenswrapper[4799]: I0127 09:17:14.067373 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.034887 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5hjv"] Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.036797 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.063123 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5hjv"] Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.073450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerStarted","Data":"88670dc471c18be59672505d315b00027a30d4d5d1530be760bf1e9759353b90"} Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.221236 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-utilities\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.221440 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pczsp\" (UniqueName: \"kubernetes.io/projected/0b762ff5-aa70-4df6-9723-887c090f3337-kube-api-access-pczsp\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.221610 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-catalog-content\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.323045 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-utilities\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.323125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pczsp\" (UniqueName: \"kubernetes.io/projected/0b762ff5-aa70-4df6-9723-887c090f3337-kube-api-access-pczsp\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.323234 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-catalog-content\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.323541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-utilities\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.323616 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-catalog-content\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.352524 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pczsp\" (UniqueName: 
\"kubernetes.io/projected/0b762ff5-aa70-4df6-9723-887c090f3337-kube-api-access-pczsp\") pod \"community-operators-d5hjv\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.370743 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:15 crc kubenswrapper[4799]: I0127 09:17:15.894556 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5hjv"] Jan 27 09:17:15 crc kubenswrapper[4799]: W0127 09:17:15.897219 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b762ff5_aa70_4df6_9723_887c090f3337.slice/crio-713a7a928b62f409f12dfbd767882d3dc6e8486141c9c173b33012f1e6a87fad WatchSource:0}: Error finding container 713a7a928b62f409f12dfbd767882d3dc6e8486141c9c173b33012f1e6a87fad: Status 404 returned error can't find the container with id 713a7a928b62f409f12dfbd767882d3dc6e8486141c9c173b33012f1e6a87fad Jan 27 09:17:16 crc kubenswrapper[4799]: I0127 09:17:16.097647 4799 generic.go:334] "Generic (PLEG): container finished" podID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerID="88670dc471c18be59672505d315b00027a30d4d5d1530be760bf1e9759353b90" exitCode=0 Jan 27 09:17:16 crc kubenswrapper[4799]: I0127 09:17:16.097719 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerDied","Data":"88670dc471c18be59672505d315b00027a30d4d5d1530be760bf1e9759353b90"} Jan 27 09:17:16 crc kubenswrapper[4799]: I0127 09:17:16.099985 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hjv" 
event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerStarted","Data":"0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2"} Jan 27 09:17:16 crc kubenswrapper[4799]: I0127 09:17:16.100022 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hjv" event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerStarted","Data":"713a7a928b62f409f12dfbd767882d3dc6e8486141c9c173b33012f1e6a87fad"} Jan 27 09:17:17 crc kubenswrapper[4799]: I0127 09:17:17.109444 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b762ff5-aa70-4df6-9723-887c090f3337" containerID="0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2" exitCode=0 Jan 27 09:17:17 crc kubenswrapper[4799]: I0127 09:17:17.109512 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hjv" event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerDied","Data":"0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2"} Jan 27 09:17:17 crc kubenswrapper[4799]: I0127 09:17:17.113629 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerStarted","Data":"46e279b6bdfe354465b70f9da5dc122ac148b685c5370865720dd743a95ddf6a"} Jan 27 09:17:17 crc kubenswrapper[4799]: I0127 09:17:17.152226 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q4xwm" podStartSLOduration=2.724729758 podStartE2EDuration="5.152208161s" podCreationTimestamp="2026-01-27 09:17:12 +0000 UTC" firstStartedPulling="2026-01-27 09:17:14.066972575 +0000 UTC m=+5500.378076660" lastFinishedPulling="2026-01-27 09:17:16.494450958 +0000 UTC m=+5502.805555063" observedRunningTime="2026-01-27 09:17:17.147506142 +0000 UTC m=+5503.458610237" watchObservedRunningTime="2026-01-27 09:17:17.152208161 +0000 UTC 
m=+5503.463312226" Jan 27 09:17:18 crc kubenswrapper[4799]: I0127 09:17:18.127249 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hjv" event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerStarted","Data":"de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115"} Jan 27 09:17:18 crc kubenswrapper[4799]: I0127 09:17:18.451458 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:17:18 crc kubenswrapper[4799]: E0127 09:17:18.451727 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:17:19 crc kubenswrapper[4799]: I0127 09:17:19.139272 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b762ff5-aa70-4df6-9723-887c090f3337" containerID="de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115" exitCode=0 Jan 27 09:17:19 crc kubenswrapper[4799]: I0127 09:17:19.139331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hjv" event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerDied","Data":"de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115"} Jan 27 09:17:20 crc kubenswrapper[4799]: I0127 09:17:20.149033 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hjv" event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerStarted","Data":"568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105"} Jan 27 09:17:20 crc kubenswrapper[4799]: I0127 09:17:20.179938 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5hjv" podStartSLOduration=2.7401311919999998 podStartE2EDuration="5.17991492s" podCreationTimestamp="2026-01-27 09:17:15 +0000 UTC" firstStartedPulling="2026-01-27 09:17:17.110980918 +0000 UTC m=+5503.422084983" lastFinishedPulling="2026-01-27 09:17:19.550764646 +0000 UTC m=+5505.861868711" observedRunningTime="2026-01-27 09:17:20.170835292 +0000 UTC m=+5506.481939357" watchObservedRunningTime="2026-01-27 09:17:20.17991492 +0000 UTC m=+5506.491018985" Jan 27 09:17:22 crc kubenswrapper[4799]: I0127 09:17:22.370499 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:22 crc kubenswrapper[4799]: I0127 09:17:22.372159 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:22 crc kubenswrapper[4799]: I0127 09:17:22.426141 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:23 crc kubenswrapper[4799]: I0127 09:17:23.214481 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:24 crc kubenswrapper[4799]: I0127 09:17:24.230204 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q4xwm"] Jan 27 09:17:25 crc kubenswrapper[4799]: I0127 09:17:25.188331 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q4xwm" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="registry-server" containerID="cri-o://46e279b6bdfe354465b70f9da5dc122ac148b685c5370865720dd743a95ddf6a" gracePeriod=2 Jan 27 09:17:25 crc kubenswrapper[4799]: I0127 09:17:25.371225 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:25 crc kubenswrapper[4799]: I0127 09:17:25.371278 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:25 crc kubenswrapper[4799]: I0127 09:17:25.410880 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.198534 4799 generic.go:334] "Generic (PLEG): container finished" podID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerID="46e279b6bdfe354465b70f9da5dc122ac148b685c5370865720dd743a95ddf6a" exitCode=0 Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.198605 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerDied","Data":"46e279b6bdfe354465b70f9da5dc122ac148b685c5370865720dd743a95ddf6a"} Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.248794 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.339113 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.512087 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-utilities\") pod \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.512258 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-catalog-content\") pod \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.512327 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whlc4\" (UniqueName: \"kubernetes.io/projected/65d33f0a-43f7-41ec-878e-b7c01550bfa1-kube-api-access-whlc4\") pod \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\" (UID: \"65d33f0a-43f7-41ec-878e-b7c01550bfa1\") " Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.513051 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-utilities" (OuterVolumeSpecName: "utilities") pod "65d33f0a-43f7-41ec-878e-b7c01550bfa1" (UID: "65d33f0a-43f7-41ec-878e-b7c01550bfa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.517350 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d33f0a-43f7-41ec-878e-b7c01550bfa1-kube-api-access-whlc4" (OuterVolumeSpecName: "kube-api-access-whlc4") pod "65d33f0a-43f7-41ec-878e-b7c01550bfa1" (UID: "65d33f0a-43f7-41ec-878e-b7c01550bfa1"). InnerVolumeSpecName "kube-api-access-whlc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.614561 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whlc4\" (UniqueName: \"kubernetes.io/projected/65d33f0a-43f7-41ec-878e-b7c01550bfa1-kube-api-access-whlc4\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.614598 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.649152 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65d33f0a-43f7-41ec-878e-b7c01550bfa1" (UID: "65d33f0a-43f7-41ec-878e-b7c01550bfa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.715739 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d33f0a-43f7-41ec-878e-b7c01550bfa1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:26 crc kubenswrapper[4799]: I0127 09:17:26.832533 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5hjv"] Jan 27 09:17:27 crc kubenswrapper[4799]: I0127 09:17:27.213173 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4xwm" Jan 27 09:17:27 crc kubenswrapper[4799]: I0127 09:17:27.213206 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4xwm" event={"ID":"65d33f0a-43f7-41ec-878e-b7c01550bfa1","Type":"ContainerDied","Data":"bb374359c3ac144fe45c6ee251efdc1708e872c3702cb97c0b9a98673a209162"} Jan 27 09:17:27 crc kubenswrapper[4799]: I0127 09:17:27.213856 4799 scope.go:117] "RemoveContainer" containerID="46e279b6bdfe354465b70f9da5dc122ac148b685c5370865720dd743a95ddf6a" Jan 27 09:17:27 crc kubenswrapper[4799]: I0127 09:17:27.246545 4799 scope.go:117] "RemoveContainer" containerID="88670dc471c18be59672505d315b00027a30d4d5d1530be760bf1e9759353b90" Jan 27 09:17:27 crc kubenswrapper[4799]: I0127 09:17:27.270277 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q4xwm"] Jan 27 09:17:27 crc kubenswrapper[4799]: I0127 09:17:27.276513 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q4xwm"] Jan 27 09:17:27 crc kubenswrapper[4799]: I0127 09:17:27.294773 4799 scope.go:117] "RemoveContainer" containerID="f28998e9be402adee3410ed0755df0e0d9c5c3c25521af52b90474211fc191ee" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.222212 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5hjv" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="registry-server" containerID="cri-o://568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105" gracePeriod=2 Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.464167 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" path="/var/lib/kubelet/pods/65d33f0a-43f7-41ec-878e-b7c01550bfa1/volumes" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.655214 4799 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.751831 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pczsp\" (UniqueName: \"kubernetes.io/projected/0b762ff5-aa70-4df6-9723-887c090f3337-kube-api-access-pczsp\") pod \"0b762ff5-aa70-4df6-9723-887c090f3337\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.752336 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-utilities\") pod \"0b762ff5-aa70-4df6-9723-887c090f3337\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.752494 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-catalog-content\") pod \"0b762ff5-aa70-4df6-9723-887c090f3337\" (UID: \"0b762ff5-aa70-4df6-9723-887c090f3337\") " Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.757100 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-utilities" (OuterVolumeSpecName: "utilities") pod "0b762ff5-aa70-4df6-9723-887c090f3337" (UID: "0b762ff5-aa70-4df6-9723-887c090f3337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.760020 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b762ff5-aa70-4df6-9723-887c090f3337-kube-api-access-pczsp" (OuterVolumeSpecName: "kube-api-access-pczsp") pod "0b762ff5-aa70-4df6-9723-887c090f3337" (UID: "0b762ff5-aa70-4df6-9723-887c090f3337"). InnerVolumeSpecName "kube-api-access-pczsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.813002 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b762ff5-aa70-4df6-9723-887c090f3337" (UID: "0b762ff5-aa70-4df6-9723-887c090f3337"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.855200 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.855238 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b762ff5-aa70-4df6-9723-887c090f3337-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:28 crc kubenswrapper[4799]: I0127 09:17:28.855250 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pczsp\" (UniqueName: \"kubernetes.io/projected/0b762ff5-aa70-4df6-9723-887c090f3337-kube-api-access-pczsp\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.233052 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b762ff5-aa70-4df6-9723-887c090f3337" containerID="568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105" exitCode=0 Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.233103 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hjv" event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerDied","Data":"568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105"} Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.233134 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-d5hjv" event={"ID":"0b762ff5-aa70-4df6-9723-887c090f3337","Type":"ContainerDied","Data":"713a7a928b62f409f12dfbd767882d3dc6e8486141c9c173b33012f1e6a87fad"} Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.233154 4799 scope.go:117] "RemoveContainer" containerID="568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.233181 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5hjv" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.253687 4799 scope.go:117] "RemoveContainer" containerID="de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.278692 4799 scope.go:117] "RemoveContainer" containerID="0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.285569 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5hjv"] Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.294769 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5hjv"] Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.326646 4799 scope.go:117] "RemoveContainer" containerID="568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105" Jan 27 09:17:29 crc kubenswrapper[4799]: E0127 09:17:29.327140 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105\": container with ID starting with 568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105 not found: ID does not exist" containerID="568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 
09:17:29.327209 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105"} err="failed to get container status \"568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105\": rpc error: code = NotFound desc = could not find container \"568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105\": container with ID starting with 568722e205d6e9f07125f641bb80ee4fdd918b26fa8ea45717a6f5664e557105 not found: ID does not exist" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.327246 4799 scope.go:117] "RemoveContainer" containerID="de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115" Jan 27 09:17:29 crc kubenswrapper[4799]: E0127 09:17:29.327571 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115\": container with ID starting with de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115 not found: ID does not exist" containerID="de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.327598 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115"} err="failed to get container status \"de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115\": rpc error: code = NotFound desc = could not find container \"de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115\": container with ID starting with de100df8c3007ffa7d27dd7ccc31103b4e97f580f5fc9e4dd74af2cc28fc9115 not found: ID does not exist" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.327612 4799 scope.go:117] "RemoveContainer" containerID="0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2" Jan 27 09:17:29 crc 
kubenswrapper[4799]: E0127 09:17:29.327942 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2\": container with ID starting with 0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2 not found: ID does not exist" containerID="0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2" Jan 27 09:17:29 crc kubenswrapper[4799]: I0127 09:17:29.327979 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2"} err="failed to get container status \"0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2\": rpc error: code = NotFound desc = could not find container \"0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2\": container with ID starting with 0ab3ea85814f8904e4f0324e3b7e6920eadcdd483be8276f1f2843005ea3adb2 not found: ID does not exist" Jan 27 09:17:30 crc kubenswrapper[4799]: I0127 09:17:30.464733 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" path="/var/lib/kubelet/pods/0b762ff5-aa70-4df6-9723-887c090f3337/volumes" Jan 27 09:17:30 crc kubenswrapper[4799]: I0127 09:17:30.496602 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59d99bc4df-65fzn" Jan 27 09:17:33 crc kubenswrapper[4799]: I0127 09:17:33.451923 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:17:33 crc kubenswrapper[4799]: E0127 09:17:33.452539 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.357330 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-33d6-account-create-update-nmrkh"] Jan 27 09:17:37 crc kubenswrapper[4799]: E0127 09:17:37.358560 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="registry-server" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.358577 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="registry-server" Jan 27 09:17:37 crc kubenswrapper[4799]: E0127 09:17:37.358613 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="extract-utilities" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.358620 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="extract-utilities" Jan 27 09:17:37 crc kubenswrapper[4799]: E0127 09:17:37.358629 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="extract-content" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.358635 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="extract-content" Jan 27 09:17:37 crc kubenswrapper[4799]: E0127 09:17:37.358649 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="extract-content" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.358656 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="extract-content" Jan 27 
09:17:37 crc kubenswrapper[4799]: E0127 09:17:37.358672 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="registry-server" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.358677 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="registry-server" Jan 27 09:17:37 crc kubenswrapper[4799]: E0127 09:17:37.358700 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="extract-utilities" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.358706 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="extract-utilities" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.371215 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d33f0a-43f7-41ec-878e-b7c01550bfa1" containerName="registry-server" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.371272 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b762ff5-aa70-4df6-9723-887c090f3337" containerName="registry-server" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.371851 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vr2jr"] Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.372525 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.373963 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.374912 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.379682 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-33d6-account-create-update-nmrkh"] Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.387512 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vr2jr"] Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.505504 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5kl\" (UniqueName: \"kubernetes.io/projected/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-kube-api-access-kr5kl\") pod \"glance-33d6-account-create-update-nmrkh\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.505871 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-operator-scripts\") pod \"glance-33d6-account-create-update-nmrkh\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.505907 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mj5q\" (UniqueName: \"kubernetes.io/projected/c403195d-3c11-49a2-9f59-31ab7b208057-kube-api-access-4mj5q\") pod \"glance-db-create-vr2jr\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.505932 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c403195d-3c11-49a2-9f59-31ab7b208057-operator-scripts\") pod \"glance-db-create-vr2jr\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.607909 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5kl\" (UniqueName: \"kubernetes.io/projected/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-kube-api-access-kr5kl\") pod \"glance-33d6-account-create-update-nmrkh\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.608000 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-operator-scripts\") pod \"glance-33d6-account-create-update-nmrkh\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.608039 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mj5q\" (UniqueName: \"kubernetes.io/projected/c403195d-3c11-49a2-9f59-31ab7b208057-kube-api-access-4mj5q\") pod \"glance-db-create-vr2jr\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.608063 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c403195d-3c11-49a2-9f59-31ab7b208057-operator-scripts\") pod \"glance-db-create-vr2jr\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.608888 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-operator-scripts\") pod \"glance-33d6-account-create-update-nmrkh\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.608917 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c403195d-3c11-49a2-9f59-31ab7b208057-operator-scripts\") pod \"glance-db-create-vr2jr\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.627686 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mj5q\" (UniqueName: \"kubernetes.io/projected/c403195d-3c11-49a2-9f59-31ab7b208057-kube-api-access-4mj5q\") pod \"glance-db-create-vr2jr\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.629398 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr5kl\" (UniqueName: \"kubernetes.io/projected/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-kube-api-access-kr5kl\") pod \"glance-33d6-account-create-update-nmrkh\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.695488 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:37 crc kubenswrapper[4799]: I0127 09:17:37.714883 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:38 crc kubenswrapper[4799]: I0127 09:17:38.134812 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-33d6-account-create-update-nmrkh"] Jan 27 09:17:38 crc kubenswrapper[4799]: I0127 09:17:38.216787 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vr2jr"] Jan 27 09:17:38 crc kubenswrapper[4799]: W0127 09:17:38.226508 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc403195d_3c11_49a2_9f59_31ab7b208057.slice/crio-4c97cfe1d185217bad62a7657c6a0212e2c220dd16ce4b02d45d32aba83d715b WatchSource:0}: Error finding container 4c97cfe1d185217bad62a7657c6a0212e2c220dd16ce4b02d45d32aba83d715b: Status 404 returned error can't find the container with id 4c97cfe1d185217bad62a7657c6a0212e2c220dd16ce4b02d45d32aba83d715b Jan 27 09:17:38 crc kubenswrapper[4799]: I0127 09:17:38.304937 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vr2jr" event={"ID":"c403195d-3c11-49a2-9f59-31ab7b208057","Type":"ContainerStarted","Data":"4c97cfe1d185217bad62a7657c6a0212e2c220dd16ce4b02d45d32aba83d715b"} Jan 27 09:17:38 crc kubenswrapper[4799]: I0127 09:17:38.306875 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-33d6-account-create-update-nmrkh" event={"ID":"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8","Type":"ContainerStarted","Data":"e326d32c23af80db9f791d09580d930e6fa34dfbe74ded9b4b32a7e69b43c0bf"} Jan 27 09:17:38 crc kubenswrapper[4799]: I0127 09:17:38.333311 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-33d6-account-create-update-nmrkh" podStartSLOduration=1.333274659 podStartE2EDuration="1.333274659s" podCreationTimestamp="2026-01-27 09:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 09:17:38.320750658 +0000 UTC m=+5524.631854743" watchObservedRunningTime="2026-01-27 09:17:38.333274659 +0000 UTC m=+5524.644378724" Jan 27 09:17:39 crc kubenswrapper[4799]: I0127 09:17:39.315070 4799 generic.go:334] "Generic (PLEG): container finished" podID="f0071dd9-0cd6-4403-bfaa-3469fc70f3d8" containerID="760022c8897149c6bc551f19473d3fe27eddab0f2fc20be6d48b15c0eb956001" exitCode=0 Jan 27 09:17:39 crc kubenswrapper[4799]: I0127 09:17:39.315130 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-33d6-account-create-update-nmrkh" event={"ID":"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8","Type":"ContainerDied","Data":"760022c8897149c6bc551f19473d3fe27eddab0f2fc20be6d48b15c0eb956001"} Jan 27 09:17:39 crc kubenswrapper[4799]: I0127 09:17:39.316609 4799 generic.go:334] "Generic (PLEG): container finished" podID="c403195d-3c11-49a2-9f59-31ab7b208057" containerID="601380be6a575a081a1de72464abe217aac630b6b3a5f0b2e3eb2fdf5a177437" exitCode=0 Jan 27 09:17:39 crc kubenswrapper[4799]: I0127 09:17:39.316640 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vr2jr" event={"ID":"c403195d-3c11-49a2-9f59-31ab7b208057","Type":"ContainerDied","Data":"601380be6a575a081a1de72464abe217aac630b6b3a5f0b2e3eb2fdf5a177437"} Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.681557 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-33d6-account-create-update-nmrkh" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.688212 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vr2jr" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.757564 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c403195d-3c11-49a2-9f59-31ab7b208057-operator-scripts\") pod \"c403195d-3c11-49a2-9f59-31ab7b208057\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.757703 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr5kl\" (UniqueName: \"kubernetes.io/projected/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-kube-api-access-kr5kl\") pod \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.757754 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-operator-scripts\") pod \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\" (UID: \"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8\") " Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.757845 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mj5q\" (UniqueName: \"kubernetes.io/projected/c403195d-3c11-49a2-9f59-31ab7b208057-kube-api-access-4mj5q\") pod \"c403195d-3c11-49a2-9f59-31ab7b208057\" (UID: \"c403195d-3c11-49a2-9f59-31ab7b208057\") " Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.758724 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c403195d-3c11-49a2-9f59-31ab7b208057-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c403195d-3c11-49a2-9f59-31ab7b208057" (UID: "c403195d-3c11-49a2-9f59-31ab7b208057"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.758828 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0071dd9-0cd6-4403-bfaa-3469fc70f3d8" (UID: "f0071dd9-0cd6-4403-bfaa-3469fc70f3d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.763419 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c403195d-3c11-49a2-9f59-31ab7b208057-kube-api-access-4mj5q" (OuterVolumeSpecName: "kube-api-access-4mj5q") pod "c403195d-3c11-49a2-9f59-31ab7b208057" (UID: "c403195d-3c11-49a2-9f59-31ab7b208057"). InnerVolumeSpecName "kube-api-access-4mj5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.763900 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-kube-api-access-kr5kl" (OuterVolumeSpecName: "kube-api-access-kr5kl") pod "f0071dd9-0cd6-4403-bfaa-3469fc70f3d8" (UID: "f0071dd9-0cd6-4403-bfaa-3469fc70f3d8"). InnerVolumeSpecName "kube-api-access-kr5kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.860007 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr5kl\" (UniqueName: \"kubernetes.io/projected/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-kube-api-access-kr5kl\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.860043 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.860056 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mj5q\" (UniqueName: \"kubernetes.io/projected/c403195d-3c11-49a2-9f59-31ab7b208057-kube-api-access-4mj5q\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:40 crc kubenswrapper[4799]: I0127 09:17:40.860067 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c403195d-3c11-49a2-9f59-31ab7b208057-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:41 crc kubenswrapper[4799]: I0127 09:17:41.337144 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-33d6-account-create-update-nmrkh" event={"ID":"f0071dd9-0cd6-4403-bfaa-3469fc70f3d8","Type":"ContainerDied","Data":"e326d32c23af80db9f791d09580d930e6fa34dfbe74ded9b4b32a7e69b43c0bf"} Jan 27 09:17:41 crc kubenswrapper[4799]: I0127 09:17:41.337186 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e326d32c23af80db9f791d09580d930e6fa34dfbe74ded9b4b32a7e69b43c0bf" Jan 27 09:17:41 crc kubenswrapper[4799]: I0127 09:17:41.337191 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-33d6-account-create-update-nmrkh"
Jan 27 09:17:41 crc kubenswrapper[4799]: I0127 09:17:41.338846 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vr2jr" event={"ID":"c403195d-3c11-49a2-9f59-31ab7b208057","Type":"ContainerDied","Data":"4c97cfe1d185217bad62a7657c6a0212e2c220dd16ce4b02d45d32aba83d715b"}
Jan 27 09:17:41 crc kubenswrapper[4799]: I0127 09:17:41.338873 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c97cfe1d185217bad62a7657c6a0212e2c220dd16ce4b02d45d32aba83d715b"
Jan 27 09:17:41 crc kubenswrapper[4799]: I0127 09:17:41.338918 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vr2jr"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.557625 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cqbgh"]
Jan 27 09:17:42 crc kubenswrapper[4799]: E0127 09:17:42.559530 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0071dd9-0cd6-4403-bfaa-3469fc70f3d8" containerName="mariadb-account-create-update"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.559552 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0071dd9-0cd6-4403-bfaa-3469fc70f3d8" containerName="mariadb-account-create-update"
Jan 27 09:17:42 crc kubenswrapper[4799]: E0127 09:17:42.559595 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c403195d-3c11-49a2-9f59-31ab7b208057" containerName="mariadb-database-create"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.559604 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c403195d-3c11-49a2-9f59-31ab7b208057" containerName="mariadb-database-create"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.559825 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c403195d-3c11-49a2-9f59-31ab7b208057" containerName="mariadb-database-create"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.559845 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0071dd9-0cd6-4403-bfaa-3469fc70f3d8" containerName="mariadb-account-create-update"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.560608 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.567978 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cqbgh"]
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.599751 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.599821 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7tlt"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.692251 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-config-data\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.692367 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcg9h\" (UniqueName: \"kubernetes.io/projected/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-kube-api-access-xcg9h\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.692437 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-db-sync-config-data\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.692461 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-combined-ca-bundle\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.794471 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-config-data\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.794561 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcg9h\" (UniqueName: \"kubernetes.io/projected/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-kube-api-access-xcg9h\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.794598 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-db-sync-config-data\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.794620 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-combined-ca-bundle\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.799255 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-config-data\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.799334 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-combined-ca-bundle\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.799521 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-db-sync-config-data\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.825293 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcg9h\" (UniqueName: \"kubernetes.io/projected/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-kube-api-access-xcg9h\") pod \"glance-db-sync-cqbgh\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") " pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:42 crc kubenswrapper[4799]: I0127 09:17:42.912002 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:43 crc kubenswrapper[4799]: I0127 09:17:43.421466 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cqbgh"]
Jan 27 09:17:44 crc kubenswrapper[4799]: I0127 09:17:44.371021 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cqbgh" event={"ID":"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba","Type":"ContainerStarted","Data":"51308f0a6c44f816de55643d5180b5202c7a566b138fea94dccada83476d833c"}
Jan 27 09:17:44 crc kubenswrapper[4799]: I0127 09:17:44.371573 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cqbgh" event={"ID":"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba","Type":"ContainerStarted","Data":"8e28575f830db5a77ca915ce4ddc3a35710178e7947be7a1dd3bc8315c48a120"}
Jan 27 09:17:44 crc kubenswrapper[4799]: I0127 09:17:44.393439 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cqbgh" podStartSLOduration=2.393418967 podStartE2EDuration="2.393418967s" podCreationTimestamp="2026-01-27 09:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:44.387826425 +0000 UTC m=+5530.698930510" watchObservedRunningTime="2026-01-27 09:17:44.393418967 +0000 UTC m=+5530.704523032"
Jan 27 09:17:46 crc kubenswrapper[4799]: I0127 09:17:46.451884 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47"
Jan 27 09:17:46 crc kubenswrapper[4799]: E0127 09:17:46.452400 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 09:17:48 crc kubenswrapper[4799]: I0127 09:17:48.404961 4799 generic.go:334] "Generic (PLEG): container finished" podID="7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" containerID="51308f0a6c44f816de55643d5180b5202c7a566b138fea94dccada83476d833c" exitCode=0
Jan 27 09:17:48 crc kubenswrapper[4799]: I0127 09:17:48.405007 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cqbgh" event={"ID":"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba","Type":"ContainerDied","Data":"51308f0a6c44f816de55643d5180b5202c7a566b138fea94dccada83476d833c"}
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.808724 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.948827 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcg9h\" (UniqueName: \"kubernetes.io/projected/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-kube-api-access-xcg9h\") pod \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") "
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.948929 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-combined-ca-bundle\") pod \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") "
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.948977 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-db-sync-config-data\") pod \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") "
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.949080 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-config-data\") pod \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\" (UID: \"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba\") "
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.954246 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" (UID: "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.954322 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-kube-api-access-xcg9h" (OuterVolumeSpecName: "kube-api-access-xcg9h") pod "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" (UID: "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba"). InnerVolumeSpecName "kube-api-access-xcg9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.972643 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" (UID: "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:17:49 crc kubenswrapper[4799]: I0127 09:17:49.999008 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-config-data" (OuterVolumeSpecName: "config-data") pod "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" (UID: "7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.051321 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.051356 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.051366 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.051401 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcg9h\" (UniqueName: \"kubernetes.io/projected/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba-kube-api-access-xcg9h\") on node \"crc\" DevicePath \"\""
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.424115 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cqbgh" event={"ID":"7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba","Type":"ContainerDied","Data":"8e28575f830db5a77ca915ce4ddc3a35710178e7947be7a1dd3bc8315c48a120"}
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.424438 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e28575f830db5a77ca915ce4ddc3a35710178e7947be7a1dd3bc8315c48a120"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.424400 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cqbgh"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.744448 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 09:17:50 crc kubenswrapper[4799]: E0127 09:17:50.744798 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" containerName="glance-db-sync"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.744815 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" containerName="glance-db-sync"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.744995 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" containerName="glance-db-sync"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.745922 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.747825 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7tlt"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.749595 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.749616 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.751714 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.758644 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.865003 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkq2b\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-kube-api-access-jkq2b\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.865060 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.865353 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-scripts\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.865379 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-config-data\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.865434 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.865460 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-logs\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.865856 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-ceph\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.903409 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89755789-9zlzq"]
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.907523 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.920366 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89755789-9zlzq"]
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.966972 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-config-data\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967020 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-dns-svc\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967046 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967106 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967199 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-logs\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967288 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md78t\" (UniqueName: \"kubernetes.io/projected/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-kube-api-access-md78t\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967346 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-ceph\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967393 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967721 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkq2b\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-kube-api-access-jkq2b\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967764 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-config\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967819 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.967876 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-scripts\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.968345 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-logs\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.969405 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.973961 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-config-data\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.974949 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.985961 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-scripts\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:50 crc kubenswrapper[4799]: I0127 09:17:50.991002 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-ceph\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.006086 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkq2b\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-kube-api-access-jkq2b\") pod \"glance-default-external-api-0\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " pod="openstack/glance-default-external-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.024639 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.026731 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.031076 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.056044 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.069899 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-dns-svc\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.069951 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.070030 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md78t\" (UniqueName: \"kubernetes.io/projected/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-kube-api-access-md78t\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.070065 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.070136 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-config\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.071168 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-dns-svc\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.073523 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-config\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.073541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.074214 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.074715 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.092593 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md78t\" (UniqueName: \"kubernetes.io/projected/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-kube-api-access-md78t\") pod \"dnsmasq-dns-6c89755789-9zlzq\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.172238 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.172316 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.172355 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.172430 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqs7w\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-kube-api-access-dqs7w\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.172478 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.172542 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-logs\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.172589 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-ceph\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.229775 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89755789-9zlzq"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.274594 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.274677 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-logs\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.274724 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-ceph\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.274783 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.274802 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.274825 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.274861 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqs7w\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-kube-api-access-dqs7w\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.276920 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-logs\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.277561 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.280624 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.281879 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-ceph\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.283942 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.296713 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqs7w\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-kube-api-access-dqs7w\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.306707 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " pod="openstack/glance-default-internal-api-0"
Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.376163 4799 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.680495 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89755789-9zlzq"] Jan 27 09:17:51 crc kubenswrapper[4799]: I0127 09:17:51.831997 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:17:52 crc kubenswrapper[4799]: I0127 09:17:52.143709 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:17:52 crc kubenswrapper[4799]: I0127 09:17:52.223216 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:17:52 crc kubenswrapper[4799]: I0127 09:17:52.461200 4799 generic.go:334] "Generic (PLEG): container finished" podID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerID="2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a" exitCode=0 Jan 27 09:17:52 crc kubenswrapper[4799]: I0127 09:17:52.478004 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" event={"ID":"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7","Type":"ContainerDied","Data":"2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a"} Jan 27 09:17:52 crc kubenswrapper[4799]: I0127 09:17:52.478053 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" event={"ID":"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7","Type":"ContainerStarted","Data":"5d42cea24bfdfc5055913a84d1017a98a94c32e82791f0b306ef8ec684760558"} Jan 27 09:17:52 crc kubenswrapper[4799]: I0127 09:17:52.478065 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66de97b5-915d-4c78-87c1-902d07d0fe55","Type":"ContainerStarted","Data":"83c6ec2445ccf8397820bfd57ebf7796afedf67604e54ada22c4db1ad173e7fc"} Jan 27 09:17:52 crc kubenswrapper[4799]: I0127 09:17:52.478077 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"945342c1-2981-48bf-8830-0452148a0efb","Type":"ContainerStarted","Data":"e6fd8369d20784fb4d7bee3232fa19bcd757e079a669e846dce496607190e96e"} Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.491725 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66de97b5-915d-4c78-87c1-902d07d0fe55","Type":"ContainerStarted","Data":"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032"} Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.492226 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66de97b5-915d-4c78-87c1-902d07d0fe55","Type":"ContainerStarted","Data":"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4"} Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.494130 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"945342c1-2981-48bf-8830-0452148a0efb","Type":"ContainerStarted","Data":"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823"} Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.494180 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"945342c1-2981-48bf-8830-0452148a0efb","Type":"ContainerStarted","Data":"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021"} Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.494239 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-log" containerID="cri-o://dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021" gracePeriod=30 Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.494258 4799 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-external-api-0" podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-httpd" containerID="cri-o://009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823" gracePeriod=30 Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.504025 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" event={"ID":"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7","Type":"ContainerStarted","Data":"8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda"} Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.504568 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.525722 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.525696033 podStartE2EDuration="3.525696033s" podCreationTimestamp="2026-01-27 09:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:53.512954837 +0000 UTC m=+5539.824058902" watchObservedRunningTime="2026-01-27 09:17:53.525696033 +0000 UTC m=+5539.836800098" Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.537714 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.537693149 podStartE2EDuration="3.537693149s" podCreationTimestamp="2026-01-27 09:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:53.531506191 +0000 UTC m=+5539.842610276" watchObservedRunningTime="2026-01-27 09:17:53.537693149 +0000 UTC m=+5539.848797214" Jan 27 09:17:53 crc kubenswrapper[4799]: I0127 09:17:53.556909 4799 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" podStartSLOduration=3.556888842 podStartE2EDuration="3.556888842s" podCreationTimestamp="2026-01-27 09:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:53.551089284 +0000 UTC m=+5539.862193349" watchObservedRunningTime="2026-01-27 09:17:53.556888842 +0000 UTC m=+5539.867992917" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.202209 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.359883 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-scripts\") pod \"945342c1-2981-48bf-8830-0452148a0efb\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.360003 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-config-data\") pod \"945342c1-2981-48bf-8830-0452148a0efb\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.360035 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-combined-ca-bundle\") pod \"945342c1-2981-48bf-8830-0452148a0efb\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.360073 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-logs\") pod \"945342c1-2981-48bf-8830-0452148a0efb\" (UID: 
\"945342c1-2981-48bf-8830-0452148a0efb\") " Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.360126 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkq2b\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-kube-api-access-jkq2b\") pod \"945342c1-2981-48bf-8830-0452148a0efb\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.360199 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-httpd-run\") pod \"945342c1-2981-48bf-8830-0452148a0efb\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.360262 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-ceph\") pod \"945342c1-2981-48bf-8830-0452148a0efb\" (UID: \"945342c1-2981-48bf-8830-0452148a0efb\") " Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.364085 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "945342c1-2981-48bf-8830-0452148a0efb" (UID: "945342c1-2981-48bf-8830-0452148a0efb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.365567 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-logs" (OuterVolumeSpecName: "logs") pod "945342c1-2981-48bf-8830-0452148a0efb" (UID: "945342c1-2981-48bf-8830-0452148a0efb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.380648 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.381246 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-scripts" (OuterVolumeSpecName: "scripts") pod "945342c1-2981-48bf-8830-0452148a0efb" (UID: "945342c1-2981-48bf-8830-0452148a0efb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.381433 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-kube-api-access-jkq2b" (OuterVolumeSpecName: "kube-api-access-jkq2b") pod "945342c1-2981-48bf-8830-0452148a0efb" (UID: "945342c1-2981-48bf-8830-0452148a0efb"). InnerVolumeSpecName "kube-api-access-jkq2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.381601 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-ceph" (OuterVolumeSpecName: "ceph") pod "945342c1-2981-48bf-8830-0452148a0efb" (UID: "945342c1-2981-48bf-8830-0452148a0efb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.425890 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "945342c1-2981-48bf-8830-0452148a0efb" (UID: "945342c1-2981-48bf-8830-0452148a0efb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.435274 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-config-data" (OuterVolumeSpecName: "config-data") pod "945342c1-2981-48bf-8830-0452148a0efb" (UID: "945342c1-2981-48bf-8830-0452148a0efb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.461747 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.461793 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.461809 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.461822 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkq2b\" (UniqueName: \"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-kube-api-access-jkq2b\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.461833 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/945342c1-2981-48bf-8830-0452148a0efb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.461845 4799 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/945342c1-2981-48bf-8830-0452148a0efb-ceph\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.461856 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/945342c1-2981-48bf-8830-0452148a0efb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.526898 4799 generic.go:334] "Generic (PLEG): container finished" podID="945342c1-2981-48bf-8830-0452148a0efb" containerID="009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823" exitCode=0 Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.527145 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.527228 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"945342c1-2981-48bf-8830-0452148a0efb","Type":"ContainerDied","Data":"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823"} Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.527338 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"945342c1-2981-48bf-8830-0452148a0efb","Type":"ContainerDied","Data":"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021"} Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.527362 4799 scope.go:117] "RemoveContainer" containerID="009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.526931 4799 generic.go:334] "Generic (PLEG): container finished" podID="945342c1-2981-48bf-8830-0452148a0efb" containerID="dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021" exitCode=143 Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.529859 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"945342c1-2981-48bf-8830-0452148a0efb","Type":"ContainerDied","Data":"e6fd8369d20784fb4d7bee3232fa19bcd757e079a669e846dce496607190e96e"} Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.565835 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.578560 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.582006 4799 scope.go:117] "RemoveContainer" containerID="dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.592440 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:17:54 crc kubenswrapper[4799]: E0127 09:17:54.592954 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-httpd" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.592981 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-httpd" Jan 27 09:17:54 crc kubenswrapper[4799]: E0127 09:17:54.593009 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-log" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.593019 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-log" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.593232 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-log" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.593258 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="945342c1-2981-48bf-8830-0452148a0efb" containerName="glance-httpd" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.594501 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.599443 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.608577 4799 scope.go:117] "RemoveContainer" containerID="009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823" Jan 27 09:17:54 crc kubenswrapper[4799]: E0127 09:17:54.615912 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823\": container with ID starting with 009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823 not found: ID does not exist" containerID="009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.615967 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823"} err="failed to get container status \"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823\": rpc error: code = NotFound desc = could not find container \"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823\": container with ID starting with 009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823 not found: ID does not exist" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.615999 4799 scope.go:117] "RemoveContainer" containerID="dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021" Jan 27 09:17:54 crc kubenswrapper[4799]: E0127 09:17:54.616497 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021\": container with ID starting with dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021 not found: ID does not exist" containerID="dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.616526 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021"} err="failed to get container status \"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021\": rpc error: code = NotFound desc = could not find container \"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021\": container with ID starting with dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021 not found: ID does not exist" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.616546 4799 scope.go:117] "RemoveContainer" containerID="009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.616726 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823"} err="failed to get container status \"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823\": rpc error: code = NotFound desc = could not find container \"009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823\": container with ID starting with 009238a0ef28e89baac2a9a3d15c4e699ca950f600fcbb40e05a497e350c4823 not found: ID does not exist" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.616761 4799 scope.go:117] "RemoveContainer" containerID="dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.616965 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021"} err="failed to get container status \"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021\": rpc error: code = NotFound desc = could not find container \"dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021\": container with ID starting with dab8383bde0ca157ce73f3fd0c9c3167cf15c3bc94d141572b993b60913c9021 not found: ID does not exist" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.618843 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.668200 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.668341 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-config-data\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.668378 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.668432 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-logs\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.668613 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-ceph\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.668666 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-scripts\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.668874 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7bz\" (UniqueName: \"kubernetes.io/projected/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-kube-api-access-mp7bz\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.770470 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.770543 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-config-data\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.770579 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.770609 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-logs\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.770659 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-ceph\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.770705 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-scripts\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.770815 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7bz\" (UniqueName: \"kubernetes.io/projected/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-kube-api-access-mp7bz\") pod 
\"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.771683 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.772023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-logs\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.777848 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.777891 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-config-data\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.778395 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-ceph\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 
09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.781973 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-scripts\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.792214 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7bz\" (UniqueName: \"kubernetes.io/projected/743f1808-211a-4ebd-9a0e-32af8ccf1ba8-kube-api-access-mp7bz\") pod \"glance-default-external-api-0\" (UID: \"743f1808-211a-4ebd-9a0e-32af8ccf1ba8\") " pod="openstack/glance-default-external-api-0" Jan 27 09:17:54 crc kubenswrapper[4799]: I0127 09:17:54.927547 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 09:17:55 crc kubenswrapper[4799]: I0127 09:17:55.481089 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 09:17:55 crc kubenswrapper[4799]: I0127 09:17:55.576248 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"743f1808-211a-4ebd-9a0e-32af8ccf1ba8","Type":"ContainerStarted","Data":"8b2230b7b8be38bda1b21198d04ac46cc0d4a838a7449456e8415ab8a4388283"} Jan 27 09:17:55 crc kubenswrapper[4799]: I0127 09:17:55.577916 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-log" containerID="cri-o://f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4" gracePeriod=30 Jan 27 09:17:55 crc kubenswrapper[4799]: I0127 09:17:55.578088 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-httpd" containerID="cri-o://221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032" gracePeriod=30 Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.206553 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320003 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-combined-ca-bundle\") pod \"66de97b5-915d-4c78-87c1-902d07d0fe55\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320102 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-ceph\") pod \"66de97b5-915d-4c78-87c1-902d07d0fe55\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320146 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-config-data\") pod \"66de97b5-915d-4c78-87c1-902d07d0fe55\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320264 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-logs\") pod \"66de97b5-915d-4c78-87c1-902d07d0fe55\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320295 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-scripts\") pod 
\"66de97b5-915d-4c78-87c1-902d07d0fe55\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320507 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-httpd-run\") pod \"66de97b5-915d-4c78-87c1-902d07d0fe55\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320581 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqs7w\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-kube-api-access-dqs7w\") pod \"66de97b5-915d-4c78-87c1-902d07d0fe55\" (UID: \"66de97b5-915d-4c78-87c1-902d07d0fe55\") " Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320725 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-logs" (OuterVolumeSpecName: "logs") pod "66de97b5-915d-4c78-87c1-902d07d0fe55" (UID: "66de97b5-915d-4c78-87c1-902d07d0fe55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.320917 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "66de97b5-915d-4c78-87c1-902d07d0fe55" (UID: "66de97b5-915d-4c78-87c1-902d07d0fe55"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.321009 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.321027 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66de97b5-915d-4c78-87c1-902d07d0fe55-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.324828 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-ceph" (OuterVolumeSpecName: "ceph") pod "66de97b5-915d-4c78-87c1-902d07d0fe55" (UID: "66de97b5-915d-4c78-87c1-902d07d0fe55"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.327418 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-kube-api-access-dqs7w" (OuterVolumeSpecName: "kube-api-access-dqs7w") pod "66de97b5-915d-4c78-87c1-902d07d0fe55" (UID: "66de97b5-915d-4c78-87c1-902d07d0fe55"). InnerVolumeSpecName "kube-api-access-dqs7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.338233 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-scripts" (OuterVolumeSpecName: "scripts") pod "66de97b5-915d-4c78-87c1-902d07d0fe55" (UID: "66de97b5-915d-4c78-87c1-902d07d0fe55"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.347784 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66de97b5-915d-4c78-87c1-902d07d0fe55" (UID: "66de97b5-915d-4c78-87c1-902d07d0fe55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.374619 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-config-data" (OuterVolumeSpecName: "config-data") pod "66de97b5-915d-4c78-87c1-902d07d0fe55" (UID: "66de97b5-915d-4c78-87c1-902d07d0fe55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.423008 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.423054 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqs7w\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-kube-api-access-dqs7w\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.423070 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.423083 4799 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66de97b5-915d-4c78-87c1-902d07d0fe55-ceph\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:56 
crc kubenswrapper[4799]: I0127 09:17:56.423092 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66de97b5-915d-4c78-87c1-902d07d0fe55-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.460922 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945342c1-2981-48bf-8830-0452148a0efb" path="/var/lib/kubelet/pods/945342c1-2981-48bf-8830-0452148a0efb/volumes" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.588595 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"743f1808-211a-4ebd-9a0e-32af8ccf1ba8","Type":"ContainerStarted","Data":"98d848164ffe79069adb5dd69c8973cfe970cec00f5b12df403592c5e6f33c6f"} Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.588915 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"743f1808-211a-4ebd-9a0e-32af8ccf1ba8","Type":"ContainerStarted","Data":"7dd470a3d40b7f530980b660be2129d97ca7937a85f75ca783338a4396c73130"} Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.591691 4799 generic.go:334] "Generic (PLEG): container finished" podID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerID="221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032" exitCode=0 Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.591744 4799 generic.go:334] "Generic (PLEG): container finished" podID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerID="f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4" exitCode=143 Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.591770 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66de97b5-915d-4c78-87c1-902d07d0fe55","Type":"ContainerDied","Data":"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032"} Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 
09:17:56.591833 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66de97b5-915d-4c78-87c1-902d07d0fe55","Type":"ContainerDied","Data":"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4"} Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.591776 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.591855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66de97b5-915d-4c78-87c1-902d07d0fe55","Type":"ContainerDied","Data":"83c6ec2445ccf8397820bfd57ebf7796afedf67604e54ada22c4db1ad173e7fc"} Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.591878 4799 scope.go:117] "RemoveContainer" containerID="221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.613457 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.613435316 podStartE2EDuration="2.613435316s" podCreationTimestamp="2026-01-27 09:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:56.612966864 +0000 UTC m=+5542.924070949" watchObservedRunningTime="2026-01-27 09:17:56.613435316 +0000 UTC m=+5542.924539381" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.624734 4799 scope.go:117] "RemoveContainer" containerID="f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.638753 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.653178 4799 scope.go:117] "RemoveContainer" 
containerID="221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.653227 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:17:56 crc kubenswrapper[4799]: E0127 09:17:56.653589 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032\": container with ID starting with 221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032 not found: ID does not exist" containerID="221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.653621 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032"} err="failed to get container status \"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032\": rpc error: code = NotFound desc = could not find container \"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032\": container with ID starting with 221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032 not found: ID does not exist" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.653642 4799 scope.go:117] "RemoveContainer" containerID="f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4" Jan 27 09:17:56 crc kubenswrapper[4799]: E0127 09:17:56.653978 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4\": container with ID starting with f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4 not found: ID does not exist" containerID="f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 
09:17:56.654034 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4"} err="failed to get container status \"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4\": rpc error: code = NotFound desc = could not find container \"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4\": container with ID starting with f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4 not found: ID does not exist" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.654062 4799 scope.go:117] "RemoveContainer" containerID="221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.654392 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032"} err="failed to get container status \"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032\": rpc error: code = NotFound desc = could not find container \"221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032\": container with ID starting with 221a716dc78f91af371a68de888f23556c77ab36951a5b6ecdc33067f6809032 not found: ID does not exist" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.654437 4799 scope.go:117] "RemoveContainer" containerID="f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.654705 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4"} err="failed to get container status \"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4\": rpc error: code = NotFound desc = could not find container \"f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4\": container with ID starting with 
f260bf55fea5873b42d53fc81b7b58ea028121f85a85f960b001cfbe4d2164a4 not found: ID does not exist" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.663628 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:17:56 crc kubenswrapper[4799]: E0127 09:17:56.664047 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-httpd" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.664070 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-httpd" Jan 27 09:17:56 crc kubenswrapper[4799]: E0127 09:17:56.664079 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-log" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.664086 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-log" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.664319 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-log" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.664338 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" containerName="glance-httpd" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.665217 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.668146 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.671948 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.831412 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-logs\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.831473 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.831494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.831559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.831589 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkwqn\" (UniqueName: \"kubernetes.io/projected/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-kube-api-access-mkwqn\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.831752 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.831912 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933294 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933407 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933464 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-logs\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933488 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933513 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933591 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933635 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkwqn\" (UniqueName: \"kubernetes.io/projected/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-kube-api-access-mkwqn\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc 
kubenswrapper[4799]: I0127 09:17:56.933835 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.933964 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-logs\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.938852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.938937 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.939262 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.939349 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.954577 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkwqn\" (UniqueName: \"kubernetes.io/projected/6b78008e-aa20-42d5-a0a3-ec4c0481a0b6-kube-api-access-mkwqn\") pod \"glance-default-internal-api-0\" (UID: \"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6\") " pod="openstack/glance-default-internal-api-0" Jan 27 09:17:56 crc kubenswrapper[4799]: I0127 09:17:56.983622 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 09:17:57 crc kubenswrapper[4799]: W0127 09:17:57.513117 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b78008e_aa20_42d5_a0a3_ec4c0481a0b6.slice/crio-44ea1e455690dd3c7d4d57eef78da8b361511cfd948a3ce282163c3d36f50bde WatchSource:0}: Error finding container 44ea1e455690dd3c7d4d57eef78da8b361511cfd948a3ce282163c3d36f50bde: Status 404 returned error can't find the container with id 44ea1e455690dd3c7d4d57eef78da8b361511cfd948a3ce282163c3d36f50bde Jan 27 09:17:57 crc kubenswrapper[4799]: I0127 09:17:57.519694 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 09:17:57 crc kubenswrapper[4799]: I0127 09:17:57.606917 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6","Type":"ContainerStarted","Data":"44ea1e455690dd3c7d4d57eef78da8b361511cfd948a3ce282163c3d36f50bde"} Jan 27 09:17:58 crc kubenswrapper[4799]: I0127 09:17:58.464890 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66de97b5-915d-4c78-87c1-902d07d0fe55" 
path="/var/lib/kubelet/pods/66de97b5-915d-4c78-87c1-902d07d0fe55/volumes" Jan 27 09:17:58 crc kubenswrapper[4799]: I0127 09:17:58.618811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6","Type":"ContainerStarted","Data":"b1888b76712aa94a53c680344b9f820ab68a901e931fb0ce35a1a0e29e98ab1a"} Jan 27 09:17:58 crc kubenswrapper[4799]: I0127 09:17:58.618864 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b78008e-aa20-42d5-a0a3-ec4c0481a0b6","Type":"ContainerStarted","Data":"10ef9508a96f965ffc4d5a51c3581a27ceb60c0b3751233a2fbc75b7735019be"} Jan 27 09:17:58 crc kubenswrapper[4799]: I0127 09:17:58.651516 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.65149288 podStartE2EDuration="2.65149288s" podCreationTimestamp="2026-01-27 09:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:17:58.64124269 +0000 UTC m=+5544.952346755" watchObservedRunningTime="2026-01-27 09:17:58.65149288 +0000 UTC m=+5544.962596965" Jan 27 09:18:00 crc kubenswrapper[4799]: I0127 09:18:00.452497 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:18:00 crc kubenswrapper[4799]: E0127 09:18:00.453671 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:18:01 crc kubenswrapper[4799]: I0127 
09:18:01.232254 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" Jan 27 09:18:01 crc kubenswrapper[4799]: I0127 09:18:01.303841 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dddc8d79-4xd27"] Jan 27 09:18:01 crc kubenswrapper[4799]: I0127 09:18:01.304152 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" podUID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerName="dnsmasq-dns" containerID="cri-o://e337ca39ecfd1ea51aac6e0c6349469b8a1dde1dbd6cf6c224a63e7de679f34f" gracePeriod=10 Jan 27 09:18:01 crc kubenswrapper[4799]: I0127 09:18:01.652768 4799 generic.go:334] "Generic (PLEG): container finished" podID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerID="e337ca39ecfd1ea51aac6e0c6349469b8a1dde1dbd6cf6c224a63e7de679f34f" exitCode=0 Jan 27 09:18:01 crc kubenswrapper[4799]: I0127 09:18:01.653146 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" event={"ID":"cfd7952b-2e54-458c-9b3b-770466bcc0e7","Type":"ContainerDied","Data":"e337ca39ecfd1ea51aac6e0c6349469b8a1dde1dbd6cf6c224a63e7de679f34f"} Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.326255 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.456312 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-sb\") pod \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.456386 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-nb\") pod \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.456492 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w75sf\" (UniqueName: \"kubernetes.io/projected/cfd7952b-2e54-458c-9b3b-770466bcc0e7-kube-api-access-w75sf\") pod \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.456518 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-config\") pod \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.456572 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-dns-svc\") pod \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\" (UID: \"cfd7952b-2e54-458c-9b3b-770466bcc0e7\") " Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.465016 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cfd7952b-2e54-458c-9b3b-770466bcc0e7-kube-api-access-w75sf" (OuterVolumeSpecName: "kube-api-access-w75sf") pod "cfd7952b-2e54-458c-9b3b-770466bcc0e7" (UID: "cfd7952b-2e54-458c-9b3b-770466bcc0e7"). InnerVolumeSpecName "kube-api-access-w75sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.507947 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cfd7952b-2e54-458c-9b3b-770466bcc0e7" (UID: "cfd7952b-2e54-458c-9b3b-770466bcc0e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.510907 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-config" (OuterVolumeSpecName: "config") pod "cfd7952b-2e54-458c-9b3b-770466bcc0e7" (UID: "cfd7952b-2e54-458c-9b3b-770466bcc0e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.511739 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfd7952b-2e54-458c-9b3b-770466bcc0e7" (UID: "cfd7952b-2e54-458c-9b3b-770466bcc0e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.512454 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cfd7952b-2e54-458c-9b3b-770466bcc0e7" (UID: "cfd7952b-2e54-458c-9b3b-770466bcc0e7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.558665 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.558697 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.558707 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w75sf\" (UniqueName: \"kubernetes.io/projected/cfd7952b-2e54-458c-9b3b-770466bcc0e7-kube-api-access-w75sf\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.558716 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.558725 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfd7952b-2e54-458c-9b3b-770466bcc0e7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.664603 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" event={"ID":"cfd7952b-2e54-458c-9b3b-770466bcc0e7","Type":"ContainerDied","Data":"58f7c93d627b46aeb14903b6dfdbc30c40daa8af8bff3b7a093b65fa0d75fcf6"} Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.664663 4799 scope.go:117] "RemoveContainer" containerID="e337ca39ecfd1ea51aac6e0c6349469b8a1dde1dbd6cf6c224a63e7de679f34f" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.664796 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dddc8d79-4xd27" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.691613 4799 scope.go:117] "RemoveContainer" containerID="afa225297c65da3940e9983d850b361ea1096dc7222bd568447140a09de4aa16" Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.703513 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dddc8d79-4xd27"] Jan 27 09:18:02 crc kubenswrapper[4799]: I0127 09:18:02.710324 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dddc8d79-4xd27"] Jan 27 09:18:04 crc kubenswrapper[4799]: I0127 09:18:04.465582 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" path="/var/lib/kubelet/pods/cfd7952b-2e54-458c-9b3b-770466bcc0e7/volumes" Jan 27 09:18:04 crc kubenswrapper[4799]: I0127 09:18:04.929141 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 09:18:04 crc kubenswrapper[4799]: I0127 09:18:04.929234 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 09:18:04 crc kubenswrapper[4799]: I0127 09:18:04.966108 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 09:18:04 crc kubenswrapper[4799]: I0127 09:18:04.998156 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 09:18:05 crc kubenswrapper[4799]: I0127 09:18:05.699785 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 09:18:05 crc kubenswrapper[4799]: I0127 09:18:05.700137 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 09:18:06 crc kubenswrapper[4799]: I0127 09:18:06.984014 4799 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:06 crc kubenswrapper[4799]: I0127 09:18:06.984094 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.034071 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.042238 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.718685 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.718708 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.719138 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.719174 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.847970 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 09:18:07 crc kubenswrapper[4799]: I0127 09:18:07.855678 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 09:18:09 crc kubenswrapper[4799]: I0127 09:18:09.868332 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:09 crc kubenswrapper[4799]: I0127 09:18:09.868929 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:18:09 
crc kubenswrapper[4799]: I0127 09:18:09.869571 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 09:18:14 crc kubenswrapper[4799]: I0127 09:18:14.463818 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:18:14 crc kubenswrapper[4799]: E0127 09:18:14.465715 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.355013 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dthfg"] Jan 27 09:18:17 crc kubenswrapper[4799]: E0127 09:18:17.356577 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerName="init" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.356594 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerName="init" Jan 27 09:18:17 crc kubenswrapper[4799]: E0127 09:18:17.356627 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerName="dnsmasq-dns" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.356634 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerName="dnsmasq-dns" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.356838 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd7952b-2e54-458c-9b3b-770466bcc0e7" containerName="dnsmasq-dns" Jan 27 09:18:17 crc 
kubenswrapper[4799]: I0127 09:18:17.357786 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.367187 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dthfg"] Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.411103 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn25s\" (UniqueName: \"kubernetes.io/projected/6d6dce50-cc86-4160-960d-e175a1044a74-kube-api-access-dn25s\") pod \"placement-db-create-dthfg\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.411183 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6dce50-cc86-4160-960d-e175a1044a74-operator-scripts\") pod \"placement-db-create-dthfg\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.421856 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-91cb-account-create-update-t4vmh"] Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.423142 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.425458 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.431615 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-91cb-account-create-update-t4vmh"] Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.513176 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6dce50-cc86-4160-960d-e175a1044a74-operator-scripts\") pod \"placement-db-create-dthfg\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.513481 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwszx\" (UniqueName: \"kubernetes.io/projected/f47ce090-4047-4274-b1f9-6d3b2c467791-kube-api-access-kwszx\") pod \"placement-91cb-account-create-update-t4vmh\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.513593 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47ce090-4047-4274-b1f9-6d3b2c467791-operator-scripts\") pod \"placement-91cb-account-create-update-t4vmh\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.513644 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn25s\" (UniqueName: \"kubernetes.io/projected/6d6dce50-cc86-4160-960d-e175a1044a74-kube-api-access-dn25s\") pod 
\"placement-db-create-dthfg\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.514632 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6dce50-cc86-4160-960d-e175a1044a74-operator-scripts\") pod \"placement-db-create-dthfg\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.533432 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn25s\" (UniqueName: \"kubernetes.io/projected/6d6dce50-cc86-4160-960d-e175a1044a74-kube-api-access-dn25s\") pod \"placement-db-create-dthfg\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.615006 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47ce090-4047-4274-b1f9-6d3b2c467791-operator-scripts\") pod \"placement-91cb-account-create-update-t4vmh\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.615190 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwszx\" (UniqueName: \"kubernetes.io/projected/f47ce090-4047-4274-b1f9-6d3b2c467791-kube-api-access-kwszx\") pod \"placement-91cb-account-create-update-t4vmh\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.615976 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47ce090-4047-4274-b1f9-6d3b2c467791-operator-scripts\") pod 
\"placement-91cb-account-create-update-t4vmh\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.633952 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwszx\" (UniqueName: \"kubernetes.io/projected/f47ce090-4047-4274-b1f9-6d3b2c467791-kube-api-access-kwszx\") pod \"placement-91cb-account-create-update-t4vmh\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.679883 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dthfg" Jan 27 09:18:17 crc kubenswrapper[4799]: I0127 09:18:17.739271 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 09:18:18.118254 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dthfg"] Jan 27 09:18:18 crc kubenswrapper[4799]: W0127 09:18:18.118356 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d6dce50_cc86_4160_960d_e175a1044a74.slice/crio-500bc73f015e1115cf1647a0bf778f21bb70985c036eb78dcac2d3115c1c1a81 WatchSource:0}: Error finding container 500bc73f015e1115cf1647a0bf778f21bb70985c036eb78dcac2d3115c1c1a81: Status 404 returned error can't find the container with id 500bc73f015e1115cf1647a0bf778f21bb70985c036eb78dcac2d3115c1c1a81 Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 09:18:18.244350 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-91cb-account-create-update-t4vmh"] Jan 27 09:18:18 crc kubenswrapper[4799]: W0127 09:18:18.248096 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf47ce090_4047_4274_b1f9_6d3b2c467791.slice/crio-eb437f49f5ce78cbc0f98f13890f26d50f8ded2479b51dbc4f2a7821862d2176 WatchSource:0}: Error finding container eb437f49f5ce78cbc0f98f13890f26d50f8ded2479b51dbc4f2a7821862d2176: Status 404 returned error can't find the container with id eb437f49f5ce78cbc0f98f13890f26d50f8ded2479b51dbc4f2a7821862d2176 Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 09:18:18.834802 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-91cb-account-create-update-t4vmh" event={"ID":"f47ce090-4047-4274-b1f9-6d3b2c467791","Type":"ContainerStarted","Data":"a4d62733d08b775cc558b53e2efb5f17698c8e9b63ab6aa88f46f6dc2f5179d1"} Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 09:18:18.835283 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-91cb-account-create-update-t4vmh" event={"ID":"f47ce090-4047-4274-b1f9-6d3b2c467791","Type":"ContainerStarted","Data":"eb437f49f5ce78cbc0f98f13890f26d50f8ded2479b51dbc4f2a7821862d2176"} Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 09:18:18.840003 4799 generic.go:334] "Generic (PLEG): container finished" podID="6d6dce50-cc86-4160-960d-e175a1044a74" containerID="d92728707d8dc5fefcd6e43857c9d7157e13b0096d98e4e881748a1b0f7a66d7" exitCode=0 Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 09:18:18.840052 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dthfg" event={"ID":"6d6dce50-cc86-4160-960d-e175a1044a74","Type":"ContainerDied","Data":"d92728707d8dc5fefcd6e43857c9d7157e13b0096d98e4e881748a1b0f7a66d7"} Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 09:18:18.840104 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dthfg" event={"ID":"6d6dce50-cc86-4160-960d-e175a1044a74","Type":"ContainerStarted","Data":"500bc73f015e1115cf1647a0bf778f21bb70985c036eb78dcac2d3115c1c1a81"} Jan 27 09:18:18 crc kubenswrapper[4799]: I0127 
09:18:18.865543 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-91cb-account-create-update-t4vmh" podStartSLOduration=1.865500685 podStartE2EDuration="1.865500685s" podCreationTimestamp="2026-01-27 09:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:18:18.855880773 +0000 UTC m=+5565.166984858" watchObservedRunningTime="2026-01-27 09:18:18.865500685 +0000 UTC m=+5565.176604750" Jan 27 09:18:19 crc kubenswrapper[4799]: I0127 09:18:19.854863 4799 generic.go:334] "Generic (PLEG): container finished" podID="f47ce090-4047-4274-b1f9-6d3b2c467791" containerID="a4d62733d08b775cc558b53e2efb5f17698c8e9b63ab6aa88f46f6dc2f5179d1" exitCode=0 Jan 27 09:18:19 crc kubenswrapper[4799]: I0127 09:18:19.854933 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-91cb-account-create-update-t4vmh" event={"ID":"f47ce090-4047-4274-b1f9-6d3b2c467791","Type":"ContainerDied","Data":"a4d62733d08b775cc558b53e2efb5f17698c8e9b63ab6aa88f46f6dc2f5179d1"} Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.180972 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dthfg" Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.266228 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6dce50-cc86-4160-960d-e175a1044a74-operator-scripts\") pod \"6d6dce50-cc86-4160-960d-e175a1044a74\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.266421 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn25s\" (UniqueName: \"kubernetes.io/projected/6d6dce50-cc86-4160-960d-e175a1044a74-kube-api-access-dn25s\") pod \"6d6dce50-cc86-4160-960d-e175a1044a74\" (UID: \"6d6dce50-cc86-4160-960d-e175a1044a74\") " Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.268277 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6dce50-cc86-4160-960d-e175a1044a74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d6dce50-cc86-4160-960d-e175a1044a74" (UID: "6d6dce50-cc86-4160-960d-e175a1044a74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.302528 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6dce50-cc86-4160-960d-e175a1044a74-kube-api-access-dn25s" (OuterVolumeSpecName: "kube-api-access-dn25s") pod "6d6dce50-cc86-4160-960d-e175a1044a74" (UID: "6d6dce50-cc86-4160-960d-e175a1044a74"). InnerVolumeSpecName "kube-api-access-dn25s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.369680 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn25s\" (UniqueName: \"kubernetes.io/projected/6d6dce50-cc86-4160-960d-e175a1044a74-kube-api-access-dn25s\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.369737 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6dce50-cc86-4160-960d-e175a1044a74-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.867171 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dthfg" Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.867188 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dthfg" event={"ID":"6d6dce50-cc86-4160-960d-e175a1044a74","Type":"ContainerDied","Data":"500bc73f015e1115cf1647a0bf778f21bb70985c036eb78dcac2d3115c1c1a81"} Jan 27 09:18:20 crc kubenswrapper[4799]: I0127 09:18:20.867247 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="500bc73f015e1115cf1647a0bf778f21bb70985c036eb78dcac2d3115c1c1a81" Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.194726 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.284484 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwszx\" (UniqueName: \"kubernetes.io/projected/f47ce090-4047-4274-b1f9-6d3b2c467791-kube-api-access-kwszx\") pod \"f47ce090-4047-4274-b1f9-6d3b2c467791\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.284592 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47ce090-4047-4274-b1f9-6d3b2c467791-operator-scripts\") pod \"f47ce090-4047-4274-b1f9-6d3b2c467791\" (UID: \"f47ce090-4047-4274-b1f9-6d3b2c467791\") " Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.285208 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47ce090-4047-4274-b1f9-6d3b2c467791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f47ce090-4047-4274-b1f9-6d3b2c467791" (UID: "f47ce090-4047-4274-b1f9-6d3b2c467791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.286055 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f47ce090-4047-4274-b1f9-6d3b2c467791-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.295948 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47ce090-4047-4274-b1f9-6d3b2c467791-kube-api-access-kwszx" (OuterVolumeSpecName: "kube-api-access-kwszx") pod "f47ce090-4047-4274-b1f9-6d3b2c467791" (UID: "f47ce090-4047-4274-b1f9-6d3b2c467791"). InnerVolumeSpecName "kube-api-access-kwszx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.387911 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwszx\" (UniqueName: \"kubernetes.io/projected/f47ce090-4047-4274-b1f9-6d3b2c467791-kube-api-access-kwszx\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.882790 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-91cb-account-create-update-t4vmh" event={"ID":"f47ce090-4047-4274-b1f9-6d3b2c467791","Type":"ContainerDied","Data":"eb437f49f5ce78cbc0f98f13890f26d50f8ded2479b51dbc4f2a7821862d2176"} Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.882834 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb437f49f5ce78cbc0f98f13890f26d50f8ded2479b51dbc4f2a7821862d2176" Jan 27 09:18:21 crc kubenswrapper[4799]: I0127 09:18:21.882910 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-91cb-account-create-update-t4vmh" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.750108 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567d4c69c7-bsznb"] Jan 27 09:18:22 crc kubenswrapper[4799]: E0127 09:18:22.750776 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6dce50-cc86-4160-960d-e175a1044a74" containerName="mariadb-database-create" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.750807 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6dce50-cc86-4160-960d-e175a1044a74" containerName="mariadb-database-create" Jan 27 09:18:22 crc kubenswrapper[4799]: E0127 09:18:22.750857 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47ce090-4047-4274-b1f9-6d3b2c467791" containerName="mariadb-account-create-update" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.750870 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47ce090-4047-4274-b1f9-6d3b2c467791" containerName="mariadb-account-create-update" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.751115 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47ce090-4047-4274-b1f9-6d3b2c467791" containerName="mariadb-account-create-update" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.751157 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6dce50-cc86-4160-960d-e175a1044a74" containerName="mariadb-database-create" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.752817 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.770716 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567d4c69c7-bsznb"] Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.786903 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-92vlt"] Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.801612 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.810867 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.810996 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.811320 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ffn6h" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.816288 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-sb\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.816405 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-nb\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.816540 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-dns-svc\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.816627 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-config\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.816753 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw2n5\" (UniqueName: \"kubernetes.io/projected/bb9cf218-3d46-4767-82c8-7a8a0d569065-kube-api-access-zw2n5\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.818741 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-92vlt"] Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.919280 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw2n5\" (UniqueName: \"kubernetes.io/projected/bb9cf218-3d46-4767-82c8-7a8a0d569065-kube-api-access-zw2n5\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.919547 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-combined-ca-bundle\") pod \"placement-db-sync-92vlt\" (UID: 
\"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.919620 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-sb\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.919731 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-nb\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.919859 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hslk\" (UniqueName: \"kubernetes.io/projected/f78b45b9-36fa-4026-8f87-42cb45906e0d-kube-api-access-5hslk\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.919931 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-scripts\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.920066 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-dns-svc\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " 
pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.920150 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-config-data\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.920241 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-config\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.920275 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78b45b9-36fa-4026-8f87-42cb45906e0d-logs\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.921129 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-sb\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.922946 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-dns-svc\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.923351 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-nb\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.925088 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-config\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:22 crc kubenswrapper[4799]: I0127 09:18:22.961683 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw2n5\" (UniqueName: \"kubernetes.io/projected/bb9cf218-3d46-4767-82c8-7a8a0d569065-kube-api-access-zw2n5\") pod \"dnsmasq-dns-567d4c69c7-bsznb\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.022640 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78b45b9-36fa-4026-8f87-42cb45906e0d-logs\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.022802 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-combined-ca-bundle\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.022896 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hslk\" 
(UniqueName: \"kubernetes.io/projected/f78b45b9-36fa-4026-8f87-42cb45906e0d-kube-api-access-5hslk\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.022934 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-scripts\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.022999 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-config-data\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.023089 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78b45b9-36fa-4026-8f87-42cb45906e0d-logs\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.026888 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-scripts\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.027037 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-combined-ca-bundle\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " 
pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.027550 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-config-data\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.038790 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hslk\" (UniqueName: \"kubernetes.io/projected/f78b45b9-36fa-4026-8f87-42cb45906e0d-kube-api-access-5hslk\") pod \"placement-db-sync-92vlt\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.081648 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.126609 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.716225 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-92vlt"] Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.800049 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567d4c69c7-bsznb"] Jan 27 09:18:23 crc kubenswrapper[4799]: W0127 09:18:23.801690 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9cf218_3d46_4767_82c8_7a8a0d569065.slice/crio-b6182598eb570c1934ec27c55ae34fc6951a763957613993e12146e2dab31540 WatchSource:0}: Error finding container b6182598eb570c1934ec27c55ae34fc6951a763957613993e12146e2dab31540: Status 404 returned error can't find the container with id b6182598eb570c1934ec27c55ae34fc6951a763957613993e12146e2dab31540 Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.901832 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" event={"ID":"bb9cf218-3d46-4767-82c8-7a8a0d569065","Type":"ContainerStarted","Data":"b6182598eb570c1934ec27c55ae34fc6951a763957613993e12146e2dab31540"} Jan 27 09:18:23 crc kubenswrapper[4799]: I0127 09:18:23.903219 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-92vlt" event={"ID":"f78b45b9-36fa-4026-8f87-42cb45906e0d","Type":"ContainerStarted","Data":"387a1280aecaa3a9e066dd399390616d3ce598d56d36109f259eeb98b0900e7b"} Jan 27 09:18:24 crc kubenswrapper[4799]: I0127 09:18:24.915494 4799 generic.go:334] "Generic (PLEG): container finished" podID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerID="f947f78ed0e84ff3b0f6d1e8d150584c6ebc35378c57e84ffdbf82c5972969d4" exitCode=0 Jan 27 09:18:24 crc kubenswrapper[4799]: I0127 09:18:24.915554 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" 
event={"ID":"bb9cf218-3d46-4767-82c8-7a8a0d569065","Type":"ContainerDied","Data":"f947f78ed0e84ff3b0f6d1e8d150584c6ebc35378c57e84ffdbf82c5972969d4"} Jan 27 09:18:24 crc kubenswrapper[4799]: I0127 09:18:24.919437 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-92vlt" event={"ID":"f78b45b9-36fa-4026-8f87-42cb45906e0d","Type":"ContainerStarted","Data":"84cd476b3734168b1816354fb839e0d21ef1eb27aefd2c60c3bddc6b7782b72f"} Jan 27 09:18:25 crc kubenswrapper[4799]: I0127 09:18:25.007266 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-92vlt" podStartSLOduration=3.007235202 podStartE2EDuration="3.007235202s" podCreationTimestamp="2026-01-27 09:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:18:24.957825458 +0000 UTC m=+5571.268929523" watchObservedRunningTime="2026-01-27 09:18:25.007235202 +0000 UTC m=+5571.318339267" Jan 27 09:18:25 crc kubenswrapper[4799]: I0127 09:18:25.932469 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" event={"ID":"bb9cf218-3d46-4767-82c8-7a8a0d569065","Type":"ContainerStarted","Data":"cde58785e9861b0deb5fedfe490ae4ec5a67858bc8e363c87a84536fc5a95b80"} Jan 27 09:18:25 crc kubenswrapper[4799]: I0127 09:18:25.932861 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:28 crc kubenswrapper[4799]: I0127 09:18:28.451655 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:18:28 crc kubenswrapper[4799]: E0127 09:18:28.452268 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:18:28 crc kubenswrapper[4799]: I0127 09:18:28.963506 4799 generic.go:334] "Generic (PLEG): container finished" podID="f78b45b9-36fa-4026-8f87-42cb45906e0d" containerID="84cd476b3734168b1816354fb839e0d21ef1eb27aefd2c60c3bddc6b7782b72f" exitCode=0 Jan 27 09:18:28 crc kubenswrapper[4799]: I0127 09:18:28.963548 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-92vlt" event={"ID":"f78b45b9-36fa-4026-8f87-42cb45906e0d","Type":"ContainerDied","Data":"84cd476b3734168b1816354fb839e0d21ef1eb27aefd2c60c3bddc6b7782b72f"} Jan 27 09:18:28 crc kubenswrapper[4799]: I0127 09:18:28.985050 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" podStartSLOduration=6.985026842 podStartE2EDuration="6.985026842s" podCreationTimestamp="2026-01-27 09:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:18:25.949251813 +0000 UTC m=+5572.260355898" watchObservedRunningTime="2026-01-27 09:18:28.985026842 +0000 UTC m=+5575.296130917" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.389226 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.493065 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-combined-ca-bundle\") pod \"f78b45b9-36fa-4026-8f87-42cb45906e0d\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.493495 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-scripts\") pod \"f78b45b9-36fa-4026-8f87-42cb45906e0d\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.493627 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hslk\" (UniqueName: \"kubernetes.io/projected/f78b45b9-36fa-4026-8f87-42cb45906e0d-kube-api-access-5hslk\") pod \"f78b45b9-36fa-4026-8f87-42cb45906e0d\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.493910 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-config-data\") pod \"f78b45b9-36fa-4026-8f87-42cb45906e0d\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.493948 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78b45b9-36fa-4026-8f87-42cb45906e0d-logs\") pod \"f78b45b9-36fa-4026-8f87-42cb45906e0d\" (UID: \"f78b45b9-36fa-4026-8f87-42cb45906e0d\") " Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.494748 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f78b45b9-36fa-4026-8f87-42cb45906e0d-logs" (OuterVolumeSpecName: "logs") pod "f78b45b9-36fa-4026-8f87-42cb45906e0d" (UID: "f78b45b9-36fa-4026-8f87-42cb45906e0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.500409 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78b45b9-36fa-4026-8f87-42cb45906e0d-kube-api-access-5hslk" (OuterVolumeSpecName: "kube-api-access-5hslk") pod "f78b45b9-36fa-4026-8f87-42cb45906e0d" (UID: "f78b45b9-36fa-4026-8f87-42cb45906e0d"). InnerVolumeSpecName "kube-api-access-5hslk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.502565 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-scripts" (OuterVolumeSpecName: "scripts") pod "f78b45b9-36fa-4026-8f87-42cb45906e0d" (UID: "f78b45b9-36fa-4026-8f87-42cb45906e0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.523515 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f78b45b9-36fa-4026-8f87-42cb45906e0d" (UID: "f78b45b9-36fa-4026-8f87-42cb45906e0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.525552 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-config-data" (OuterVolumeSpecName: "config-data") pod "f78b45b9-36fa-4026-8f87-42cb45906e0d" (UID: "f78b45b9-36fa-4026-8f87-42cb45906e0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.596205 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.596567 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hslk\" (UniqueName: \"kubernetes.io/projected/f78b45b9-36fa-4026-8f87-42cb45906e0d-kube-api-access-5hslk\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.596580 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.596590 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78b45b9-36fa-4026-8f87-42cb45906e0d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.596598 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b45b9-36fa-4026-8f87-42cb45906e0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.981025 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-92vlt" event={"ID":"f78b45b9-36fa-4026-8f87-42cb45906e0d","Type":"ContainerDied","Data":"387a1280aecaa3a9e066dd399390616d3ce598d56d36109f259eeb98b0900e7b"} Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.981067 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387a1280aecaa3a9e066dd399390616d3ce598d56d36109f259eeb98b0900e7b" Jan 27 09:18:30 crc kubenswrapper[4799]: I0127 09:18:30.981113 4799 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-92vlt" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.083391 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b699b8774-fd6pb"] Jan 27 09:18:31 crc kubenswrapper[4799]: E0127 09:18:31.083792 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78b45b9-36fa-4026-8f87-42cb45906e0d" containerName="placement-db-sync" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.083810 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78b45b9-36fa-4026-8f87-42cb45906e0d" containerName="placement-db-sync" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.084036 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78b45b9-36fa-4026-8f87-42cb45906e0d" containerName="placement-db-sync" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.085596 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.091567 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.091873 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ffn6h" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.093642 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.099747 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b699b8774-fd6pb"] Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.236399 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fbee0b-130b-44ee-ab60-336327c2e8c2-logs\") pod \"placement-6b699b8774-fd6pb\" (UID: 
\"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.236468 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-config-data\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.236504 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-scripts\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.238883 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jg7t\" (UniqueName: \"kubernetes.io/projected/56fbee0b-130b-44ee-ab60-336327c2e8c2-kube-api-access-7jg7t\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.238953 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-combined-ca-bundle\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.340644 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jg7t\" (UniqueName: \"kubernetes.io/projected/56fbee0b-130b-44ee-ab60-336327c2e8c2-kube-api-access-7jg7t\") pod 
\"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.340719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-combined-ca-bundle\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.340764 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fbee0b-130b-44ee-ab60-336327c2e8c2-logs\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.340791 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-config-data\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.340819 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-scripts\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.341596 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fbee0b-130b-44ee-ab60-336327c2e8c2-logs\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc 
kubenswrapper[4799]: I0127 09:18:31.350043 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-combined-ca-bundle\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.350534 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-scripts\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.350948 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fbee0b-130b-44ee-ab60-336327c2e8c2-config-data\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.363446 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jg7t\" (UniqueName: \"kubernetes.io/projected/56fbee0b-130b-44ee-ab60-336327c2e8c2-kube-api-access-7jg7t\") pod \"placement-6b699b8774-fd6pb\" (UID: \"56fbee0b-130b-44ee-ab60-336327c2e8c2\") " pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.446710 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.914905 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b699b8774-fd6pb"] Jan 27 09:18:31 crc kubenswrapper[4799]: I0127 09:18:31.990106 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b699b8774-fd6pb" event={"ID":"56fbee0b-130b-44ee-ab60-336327c2e8c2","Type":"ContainerStarted","Data":"116bb41427310123009e20578808966f08a343f9cf1592b535cee95d50664686"} Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.002913 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b699b8774-fd6pb" event={"ID":"56fbee0b-130b-44ee-ab60-336327c2e8c2","Type":"ContainerStarted","Data":"acadb625fdd2d3f271eaa6058f32c4cc76f0b1acc3693cdeb7ae4239c4d7bb33"} Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.003562 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b699b8774-fd6pb" event={"ID":"56fbee0b-130b-44ee-ab60-336327c2e8c2","Type":"ContainerStarted","Data":"ef5e2b96f222df035e5f3a118bd530400496afb77df72b6d4787be985e460534"} Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.003589 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.003602 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.025348 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b699b8774-fd6pb" podStartSLOduration=2.025331774 podStartE2EDuration="2.025331774s" podCreationTimestamp="2026-01-27 09:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:18:33.021180351 +0000 UTC 
m=+5579.332284426" watchObservedRunningTime="2026-01-27 09:18:33.025331774 +0000 UTC m=+5579.336435839" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.083476 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.170199 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89755789-9zlzq"] Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.170538 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" podUID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerName="dnsmasq-dns" containerID="cri-o://8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda" gracePeriod=10 Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.730326 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.892853 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-nb\") pod \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.892969 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-sb\") pod \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.893163 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-config\") pod \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\" 
(UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.893278 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-dns-svc\") pod \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.893488 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md78t\" (UniqueName: \"kubernetes.io/projected/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-kube-api-access-md78t\") pod \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\" (UID: \"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7\") " Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.901602 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-kube-api-access-md78t" (OuterVolumeSpecName: "kube-api-access-md78t") pod "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" (UID: "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7"). InnerVolumeSpecName "kube-api-access-md78t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.944344 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" (UID: "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.948709 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" (UID: "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.960862 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-config" (OuterVolumeSpecName: "config") pod "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" (UID: "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.963767 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" (UID: "3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.995827 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.995898 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.995912 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md78t\" (UniqueName: \"kubernetes.io/projected/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-kube-api-access-md78t\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.995927 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-nb\") on node \"crc\" DevicePath 
\"\"" Jan 27 09:18:33 crc kubenswrapper[4799]: I0127 09:18:33.995941 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.013772 4799 generic.go:334] "Generic (PLEG): container finished" podID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerID="8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda" exitCode=0 Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.013856 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" event={"ID":"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7","Type":"ContainerDied","Data":"8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda"} Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.013944 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" event={"ID":"3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7","Type":"ContainerDied","Data":"5d42cea24bfdfc5055913a84d1017a98a94c32e82791f0b306ef8ec684760558"} Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.013967 4799 scope.go:117] "RemoveContainer" containerID="8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.013883 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89755789-9zlzq" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.042195 4799 scope.go:117] "RemoveContainer" containerID="2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.065377 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89755789-9zlzq"] Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.069174 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89755789-9zlzq"] Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.090112 4799 scope.go:117] "RemoveContainer" containerID="8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda" Jan 27 09:18:34 crc kubenswrapper[4799]: E0127 09:18:34.090774 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda\": container with ID starting with 8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda not found: ID does not exist" containerID="8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.090829 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda"} err="failed to get container status \"8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda\": rpc error: code = NotFound desc = could not find container \"8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda\": container with ID starting with 8c3bf9d4a26248c434e36b38e98151e9e7eac708eb369aa05414deabe10dfbda not found: ID does not exist" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.090856 4799 scope.go:117] "RemoveContainer" containerID="2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a" Jan 27 
09:18:34 crc kubenswrapper[4799]: E0127 09:18:34.091627 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a\": container with ID starting with 2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a not found: ID does not exist" containerID="2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.091739 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a"} err="failed to get container status \"2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a\": rpc error: code = NotFound desc = could not find container \"2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a\": container with ID starting with 2a56c2149dfcf424dae40b7223abfd8344c2c6ba5025f0474c74e864ead0071a not found: ID does not exist" Jan 27 09:18:34 crc kubenswrapper[4799]: I0127 09:18:34.463376 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" path="/var/lib/kubelet/pods/3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7/volumes" Jan 27 09:18:40 crc kubenswrapper[4799]: I0127 09:18:40.349378 4799 scope.go:117] "RemoveContainer" containerID="b7158ecd5088e1489baf596a70b53b667c35a12b56f093612c92a19fc89ff77e" Jan 27 09:18:40 crc kubenswrapper[4799]: I0127 09:18:40.377261 4799 scope.go:117] "RemoveContainer" containerID="eef2cd17b4b5022e908e1a96b9120ea50b652e8ca3046f5298cb91026d255390" Jan 27 09:18:40 crc kubenswrapper[4799]: I0127 09:18:40.451520 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:18:40 crc kubenswrapper[4799]: E0127 09:18:40.452001 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:18:54 crc kubenswrapper[4799]: I0127 09:18:54.458449 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:18:55 crc kubenswrapper[4799]: I0127 09:18:55.192284 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"6d7b80b862be55b1b06bdfee48de3c7e8807494274cc669fc0412f97be57e1fc"} Jan 27 09:19:02 crc kubenswrapper[4799]: I0127 09:19:02.462723 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:19:02 crc kubenswrapper[4799]: I0127 09:19:02.523970 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b699b8774-fd6pb" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.639953 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-w6gg8"] Jan 27 09:19:27 crc kubenswrapper[4799]: E0127 09:19:27.640767 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerName="dnsmasq-dns" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.640781 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerName="dnsmasq-dns" Jan 27 09:19:27 crc kubenswrapper[4799]: E0127 09:19:27.640795 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerName="init" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 
09:19:27.640801 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerName="init" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.640999 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3511ec8a-f6f0-4d21-bab3-6f89ff14a1e7" containerName="dnsmasq-dns" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.641577 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.665371 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-w6gg8"] Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.738595 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mp6ct"] Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.739922 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.748440 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mp6ct"] Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.775955 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brndh\" (UniqueName: \"kubernetes.io/projected/ad63280e-42ab-4d15-8f88-f19cd766140f-kube-api-access-brndh\") pod \"nova-api-db-create-w6gg8\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.776033 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad63280e-42ab-4d15-8f88-f19cd766140f-operator-scripts\") pod \"nova-api-db-create-w6gg8\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " pod="openstack/nova-api-db-create-w6gg8" Jan 
27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.846137 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4456-account-create-update-tsj7t"] Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.847882 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.855548 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.863253 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4456-account-create-update-tsj7t"] Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.877634 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc217648-43cf-48fc-a4e1-e371aacddb31-operator-scripts\") pod \"nova-cell0-db-create-mp6ct\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.877746 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtb6m\" (UniqueName: \"kubernetes.io/projected/dc217648-43cf-48fc-a4e1-e371aacddb31-kube-api-access-rtb6m\") pod \"nova-cell0-db-create-mp6ct\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.877818 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brndh\" (UniqueName: \"kubernetes.io/projected/ad63280e-42ab-4d15-8f88-f19cd766140f-kube-api-access-brndh\") pod \"nova-api-db-create-w6gg8\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.877901 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad63280e-42ab-4d15-8f88-f19cd766140f-operator-scripts\") pod \"nova-api-db-create-w6gg8\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.878773 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad63280e-42ab-4d15-8f88-f19cd766140f-operator-scripts\") pod \"nova-api-db-create-w6gg8\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.901065 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brndh\" (UniqueName: \"kubernetes.io/projected/ad63280e-42ab-4d15-8f88-f19cd766140f-kube-api-access-brndh\") pod \"nova-api-db-create-w6gg8\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.943899 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dcrt4"] Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.945240 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.961483 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.980138 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtb6m\" (UniqueName: \"kubernetes.io/projected/dc217648-43cf-48fc-a4e1-e371aacddb31-kube-api-access-rtb6m\") pod \"nova-cell0-db-create-mp6ct\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.980875 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsw8f\" (UniqueName: \"kubernetes.io/projected/a70183b9-dfc0-4f3e-838f-81c806acd0fc-kube-api-access-fsw8f\") pod \"nova-api-4456-account-create-update-tsj7t\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.981023 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70183b9-dfc0-4f3e-838f-81c806acd0fc-operator-scripts\") pod \"nova-api-4456-account-create-update-tsj7t\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.981106 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc217648-43cf-48fc-a4e1-e371aacddb31-operator-scripts\") pod \"nova-cell0-db-create-mp6ct\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.982048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc217648-43cf-48fc-a4e1-e371aacddb31-operator-scripts\") pod 
\"nova-cell0-db-create-mp6ct\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:27 crc kubenswrapper[4799]: I0127 09:19:27.985181 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dcrt4"] Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.001678 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtb6m\" (UniqueName: \"kubernetes.io/projected/dc217648-43cf-48fc-a4e1-e371aacddb31-kube-api-access-rtb6m\") pod \"nova-cell0-db-create-mp6ct\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.057249 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.060687 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5ffc-account-create-update-xkr67"] Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.062285 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.066135 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.083628 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c1026c8-75da-4392-99b6-96ccacb81316-operator-scripts\") pod \"nova-cell1-db-create-dcrt4\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.083683 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsw8f\" (UniqueName: \"kubernetes.io/projected/a70183b9-dfc0-4f3e-838f-81c806acd0fc-kube-api-access-fsw8f\") pod \"nova-api-4456-account-create-update-tsj7t\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.083736 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qd6l\" (UniqueName: \"kubernetes.io/projected/7c1026c8-75da-4392-99b6-96ccacb81316-kube-api-access-4qd6l\") pod \"nova-cell1-db-create-dcrt4\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.083763 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70183b9-dfc0-4f3e-838f-81c806acd0fc-operator-scripts\") pod \"nova-api-4456-account-create-update-tsj7t\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.084880 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70183b9-dfc0-4f3e-838f-81c806acd0fc-operator-scripts\") pod \"nova-api-4456-account-create-update-tsj7t\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.132399 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ffc-account-create-update-xkr67"] Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.158911 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsw8f\" (UniqueName: \"kubernetes.io/projected/a70183b9-dfc0-4f3e-838f-81c806acd0fc-kube-api-access-fsw8f\") pod \"nova-api-4456-account-create-update-tsj7t\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.174943 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.204106 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c1026c8-75da-4392-99b6-96ccacb81316-operator-scripts\") pod \"nova-cell1-db-create-dcrt4\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.204938 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qd6l\" (UniqueName: \"kubernetes.io/projected/7c1026c8-75da-4392-99b6-96ccacb81316-kube-api-access-4qd6l\") pod \"nova-cell1-db-create-dcrt4\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.205318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwm72\" (UniqueName: \"kubernetes.io/projected/03cda70d-3c33-441f-bd8f-f838f16d2563-kube-api-access-kwm72\") pod \"nova-cell0-5ffc-account-create-update-xkr67\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.205552 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c1026c8-75da-4392-99b6-96ccacb81316-operator-scripts\") pod \"nova-cell1-db-create-dcrt4\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.209611 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cda70d-3c33-441f-bd8f-f838f16d2563-operator-scripts\") pod 
\"nova-cell0-5ffc-account-create-update-xkr67\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.244276 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qd6l\" (UniqueName: \"kubernetes.io/projected/7c1026c8-75da-4392-99b6-96ccacb81316-kube-api-access-4qd6l\") pod \"nova-cell1-db-create-dcrt4\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.280002 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-019f-account-create-update-fjsnb"] Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.281843 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.290032 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.291779 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.301015 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-019f-account-create-update-fjsnb"] Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.315496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cda70d-3c33-441f-bd8f-f838f16d2563-operator-scripts\") pod \"nova-cell0-5ffc-account-create-update-xkr67\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.315811 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwm72\" (UniqueName: \"kubernetes.io/projected/03cda70d-3c33-441f-bd8f-f838f16d2563-kube-api-access-kwm72\") pod \"nova-cell0-5ffc-account-create-update-xkr67\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.316949 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cda70d-3c33-441f-bd8f-f838f16d2563-operator-scripts\") pod \"nova-cell0-5ffc-account-create-update-xkr67\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.336897 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwm72\" (UniqueName: \"kubernetes.io/projected/03cda70d-3c33-441f-bd8f-f838f16d2563-kube-api-access-kwm72\") pod \"nova-cell0-5ffc-account-create-update-xkr67\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 
09:19:28.421001 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bac970a-8e3f-4265-b625-3af6eeea7cbe-operator-scripts\") pod \"nova-cell1-019f-account-create-update-fjsnb\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.421137 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wcw\" (UniqueName: \"kubernetes.io/projected/8bac970a-8e3f-4265-b625-3af6eeea7cbe-kube-api-access-92wcw\") pod \"nova-cell1-019f-account-create-update-fjsnb\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.505677 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.524610 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bac970a-8e3f-4265-b625-3af6eeea7cbe-operator-scripts\") pod \"nova-cell1-019f-account-create-update-fjsnb\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.524722 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92wcw\" (UniqueName: \"kubernetes.io/projected/8bac970a-8e3f-4265-b625-3af6eeea7cbe-kube-api-access-92wcw\") pod \"nova-cell1-019f-account-create-update-fjsnb\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.531161 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bac970a-8e3f-4265-b625-3af6eeea7cbe-operator-scripts\") pod \"nova-cell1-019f-account-create-update-fjsnb\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.544053 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-w6gg8"] Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.563208 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92wcw\" (UniqueName: \"kubernetes.io/projected/8bac970a-8e3f-4265-b625-3af6eeea7cbe-kube-api-access-92wcw\") pod \"nova-cell1-019f-account-create-update-fjsnb\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.593988 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4456-account-create-update-tsj7t"] Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.628881 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.722572 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mp6ct"] Jan 27 09:19:28 crc kubenswrapper[4799]: W0127 09:19:28.733623 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc217648_43cf_48fc_a4e1_e371aacddb31.slice/crio-95cd07ec7b76126ba440918434ea1bcf92deb15c29ccb38e5b72f89d4822a9d6 WatchSource:0}: Error finding container 95cd07ec7b76126ba440918434ea1bcf92deb15c29ccb38e5b72f89d4822a9d6: Status 404 returned error can't find the container with id 95cd07ec7b76126ba440918434ea1bcf92deb15c29ccb38e5b72f89d4822a9d6 Jan 27 09:19:28 crc kubenswrapper[4799]: I0127 09:19:28.876423 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dcrt4"] Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.084351 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ffc-account-create-update-xkr67"] Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.251018 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-019f-account-create-update-fjsnb"] Jan 27 09:19:29 crc kubenswrapper[4799]: W0127 09:19:29.260661 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bac970a_8e3f_4265_b625_3af6eeea7cbe.slice/crio-94f182a64fd44c34c945b12d4c5f71c114c073600845751f57dd91be549b49e7 WatchSource:0}: Error finding container 94f182a64fd44c34c945b12d4c5f71c114c073600845751f57dd91be549b49e7: Status 404 returned error can't find the container with id 94f182a64fd44c34c945b12d4c5f71c114c073600845751f57dd91be549b49e7 Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.511977 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" event={"ID":"03cda70d-3c33-441f-bd8f-f838f16d2563","Type":"ContainerStarted","Data":"50394ff2912c3b423eec7eec525fd9dc01fd33a25935b881f697f2bf4d47031c"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.512047 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" event={"ID":"03cda70d-3c33-441f-bd8f-f838f16d2563","Type":"ContainerStarted","Data":"8562a55d41b4de851ca5a143dcd7bfe1c29e6b73ee980e283f3efbc4394feba4"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.519436 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" event={"ID":"8bac970a-8e3f-4265-b625-3af6eeea7cbe","Type":"ContainerStarted","Data":"34eb76d3005a8570a4ad70c772c712fb53a691575d2ee647ddc66e231c3f2c59"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.519492 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" event={"ID":"8bac970a-8e3f-4265-b625-3af6eeea7cbe","Type":"ContainerStarted","Data":"94f182a64fd44c34c945b12d4c5f71c114c073600845751f57dd91be549b49e7"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.529796 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mp6ct" event={"ID":"dc217648-43cf-48fc-a4e1-e371aacddb31","Type":"ContainerStarted","Data":"ec23d0cbd1926f410934e781461973f2cebab5b193e91532e49a3ffc146d02b2"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.529855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mp6ct" event={"ID":"dc217648-43cf-48fc-a4e1-e371aacddb31","Type":"ContainerStarted","Data":"95cd07ec7b76126ba440918434ea1bcf92deb15c29ccb38e5b72f89d4822a9d6"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.539611 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" podStartSLOduration=1.539585077 podStartE2EDuration="1.539585077s" podCreationTimestamp="2026-01-27 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:29.535060754 +0000 UTC m=+5635.846164819" watchObservedRunningTime="2026-01-27 09:19:29.539585077 +0000 UTC m=+5635.850689142" Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.556632 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w6gg8" event={"ID":"ad63280e-42ab-4d15-8f88-f19cd766140f","Type":"ContainerStarted","Data":"cd8052c3f085c82eb28dc088485a61689454ea99bf75dea8da82b9288d6a0640"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.556688 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w6gg8" event={"ID":"ad63280e-42ab-4d15-8f88-f19cd766140f","Type":"ContainerStarted","Data":"e5f985cefefb58d1531009eb02e80fe0bc70068036c6e48fc7575082550a0740"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.570170 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-mp6ct" podStartSLOduration=2.570153999 podStartE2EDuration="2.570153999s" podCreationTimestamp="2026-01-27 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:29.56979354 +0000 UTC m=+5635.880897615" watchObservedRunningTime="2026-01-27 09:19:29.570153999 +0000 UTC m=+5635.881258054" Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.573561 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4456-account-create-update-tsj7t" event={"ID":"a70183b9-dfc0-4f3e-838f-81c806acd0fc","Type":"ContainerStarted","Data":"b84c0ae62afdb4d4431fd7383acce54610ac98a3792aab706d6b8ce699c099e5"} Jan 27 09:19:29 crc 
kubenswrapper[4799]: I0127 09:19:29.573633 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4456-account-create-update-tsj7t" event={"ID":"a70183b9-dfc0-4f3e-838f-81c806acd0fc","Type":"ContainerStarted","Data":"66ec8b49f15e380c7838cf3cd7933a3e2748fe19a5dd91e2993997d2fff28043"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.590512 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dcrt4" event={"ID":"7c1026c8-75da-4392-99b6-96ccacb81316","Type":"ContainerStarted","Data":"cf7afecaee69098e6d0e1c7274159019d7076388b526e03be5db931ca2817e00"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.590568 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dcrt4" event={"ID":"7c1026c8-75da-4392-99b6-96ccacb81316","Type":"ContainerStarted","Data":"9ffd840198f2aedcae4e516200abec7ccdd43d244f889c829dd626744cefc7b5"} Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.608225 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-w6gg8" podStartSLOduration=2.6081950750000003 podStartE2EDuration="2.608195075s" podCreationTimestamp="2026-01-27 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:29.599648312 +0000 UTC m=+5635.910752377" watchObservedRunningTime="2026-01-27 09:19:29.608195075 +0000 UTC m=+5635.919299140" Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.623927 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-4456-account-create-update-tsj7t" podStartSLOduration=2.623899432 podStartE2EDuration="2.623899432s" podCreationTimestamp="2026-01-27 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:29.620985983 +0000 UTC 
m=+5635.932090048" watchObservedRunningTime="2026-01-27 09:19:29.623899432 +0000 UTC m=+5635.935003497" Jan 27 09:19:29 crc kubenswrapper[4799]: I0127 09:19:29.677260 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-dcrt4" podStartSLOduration=2.6772242029999997 podStartE2EDuration="2.677224203s" podCreationTimestamp="2026-01-27 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:29.661331291 +0000 UTC m=+5635.972435356" watchObservedRunningTime="2026-01-27 09:19:29.677224203 +0000 UTC m=+5635.988328268" Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.601061 4799 generic.go:334] "Generic (PLEG): container finished" podID="a70183b9-dfc0-4f3e-838f-81c806acd0fc" containerID="b84c0ae62afdb4d4431fd7383acce54610ac98a3792aab706d6b8ce699c099e5" exitCode=0 Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.601140 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4456-account-create-update-tsj7t" event={"ID":"a70183b9-dfc0-4f3e-838f-81c806acd0fc","Type":"ContainerDied","Data":"b84c0ae62afdb4d4431fd7383acce54610ac98a3792aab706d6b8ce699c099e5"} Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.603066 4799 generic.go:334] "Generic (PLEG): container finished" podID="7c1026c8-75da-4392-99b6-96ccacb81316" containerID="cf7afecaee69098e6d0e1c7274159019d7076388b526e03be5db931ca2817e00" exitCode=0 Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.603121 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dcrt4" event={"ID":"7c1026c8-75da-4392-99b6-96ccacb81316","Type":"ContainerDied","Data":"cf7afecaee69098e6d0e1c7274159019d7076388b526e03be5db931ca2817e00"} Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.608687 4799 generic.go:334] "Generic (PLEG): container finished" podID="03cda70d-3c33-441f-bd8f-f838f16d2563" 
containerID="50394ff2912c3b423eec7eec525fd9dc01fd33a25935b881f697f2bf4d47031c" exitCode=0 Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.608754 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" event={"ID":"03cda70d-3c33-441f-bd8f-f838f16d2563","Type":"ContainerDied","Data":"50394ff2912c3b423eec7eec525fd9dc01fd33a25935b881f697f2bf4d47031c"} Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.610638 4799 generic.go:334] "Generic (PLEG): container finished" podID="8bac970a-8e3f-4265-b625-3af6eeea7cbe" containerID="34eb76d3005a8570a4ad70c772c712fb53a691575d2ee647ddc66e231c3f2c59" exitCode=0 Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.610709 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" event={"ID":"8bac970a-8e3f-4265-b625-3af6eeea7cbe","Type":"ContainerDied","Data":"34eb76d3005a8570a4ad70c772c712fb53a691575d2ee647ddc66e231c3f2c59"} Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.612781 4799 generic.go:334] "Generic (PLEG): container finished" podID="dc217648-43cf-48fc-a4e1-e371aacddb31" containerID="ec23d0cbd1926f410934e781461973f2cebab5b193e91532e49a3ffc146d02b2" exitCode=0 Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.612833 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mp6ct" event={"ID":"dc217648-43cf-48fc-a4e1-e371aacddb31","Type":"ContainerDied","Data":"ec23d0cbd1926f410934e781461973f2cebab5b193e91532e49a3ffc146d02b2"} Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.615037 4799 generic.go:334] "Generic (PLEG): container finished" podID="ad63280e-42ab-4d15-8f88-f19cd766140f" containerID="cd8052c3f085c82eb28dc088485a61689454ea99bf75dea8da82b9288d6a0640" exitCode=0 Jan 27 09:19:30 crc kubenswrapper[4799]: I0127 09:19:30.615077 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w6gg8" 
event={"ID":"ad63280e-42ab-4d15-8f88-f19cd766140f","Type":"ContainerDied","Data":"cd8052c3f085c82eb28dc088485a61689454ea99bf75dea8da82b9288d6a0640"} Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.095212 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.202629 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad63280e-42ab-4d15-8f88-f19cd766140f-operator-scripts\") pod \"ad63280e-42ab-4d15-8f88-f19cd766140f\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.203070 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brndh\" (UniqueName: \"kubernetes.io/projected/ad63280e-42ab-4d15-8f88-f19cd766140f-kube-api-access-brndh\") pod \"ad63280e-42ab-4d15-8f88-f19cd766140f\" (UID: \"ad63280e-42ab-4d15-8f88-f19cd766140f\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.203587 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad63280e-42ab-4d15-8f88-f19cd766140f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad63280e-42ab-4d15-8f88-f19cd766140f" (UID: "ad63280e-42ab-4d15-8f88-f19cd766140f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.213741 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad63280e-42ab-4d15-8f88-f19cd766140f-kube-api-access-brndh" (OuterVolumeSpecName: "kube-api-access-brndh") pod "ad63280e-42ab-4d15-8f88-f19cd766140f" (UID: "ad63280e-42ab-4d15-8f88-f19cd766140f"). InnerVolumeSpecName "kube-api-access-brndh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.261604 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.269143 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.281845 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.289190 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.302512 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.304875 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brndh\" (UniqueName: \"kubernetes.io/projected/ad63280e-42ab-4d15-8f88-f19cd766140f-kube-api-access-brndh\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.304912 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad63280e-42ab-4d15-8f88-f19cd766140f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.405759 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qd6l\" (UniqueName: \"kubernetes.io/projected/7c1026c8-75da-4392-99b6-96ccacb81316-kube-api-access-4qd6l\") pod \"7c1026c8-75da-4392-99b6-96ccacb81316\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " Jan 27 09:19:32 crc 
kubenswrapper[4799]: I0127 09:19:32.405844 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc217648-43cf-48fc-a4e1-e371aacddb31-operator-scripts\") pod \"dc217648-43cf-48fc-a4e1-e371aacddb31\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.405876 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtb6m\" (UniqueName: \"kubernetes.io/projected/dc217648-43cf-48fc-a4e1-e371aacddb31-kube-api-access-rtb6m\") pod \"dc217648-43cf-48fc-a4e1-e371aacddb31\" (UID: \"dc217648-43cf-48fc-a4e1-e371aacddb31\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.405945 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70183b9-dfc0-4f3e-838f-81c806acd0fc-operator-scripts\") pod \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.405977 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92wcw\" (UniqueName: \"kubernetes.io/projected/8bac970a-8e3f-4265-b625-3af6eeea7cbe-kube-api-access-92wcw\") pod \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406004 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cda70d-3c33-441f-bd8f-f838f16d2563-operator-scripts\") pod \"03cda70d-3c33-441f-bd8f-f838f16d2563\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406029 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwm72\" (UniqueName: 
\"kubernetes.io/projected/03cda70d-3c33-441f-bd8f-f838f16d2563-kube-api-access-kwm72\") pod \"03cda70d-3c33-441f-bd8f-f838f16d2563\" (UID: \"03cda70d-3c33-441f-bd8f-f838f16d2563\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406046 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c1026c8-75da-4392-99b6-96ccacb81316-operator-scripts\") pod \"7c1026c8-75da-4392-99b6-96ccacb81316\" (UID: \"7c1026c8-75da-4392-99b6-96ccacb81316\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406117 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bac970a-8e3f-4265-b625-3af6eeea7cbe-operator-scripts\") pod \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\" (UID: \"8bac970a-8e3f-4265-b625-3af6eeea7cbe\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406133 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsw8f\" (UniqueName: \"kubernetes.io/projected/a70183b9-dfc0-4f3e-838f-81c806acd0fc-kube-api-access-fsw8f\") pod \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\" (UID: \"a70183b9-dfc0-4f3e-838f-81c806acd0fc\") " Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406395 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc217648-43cf-48fc-a4e1-e371aacddb31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc217648-43cf-48fc-a4e1-e371aacddb31" (UID: "dc217648-43cf-48fc-a4e1-e371aacddb31"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406442 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70183b9-dfc0-4f3e-838f-81c806acd0fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a70183b9-dfc0-4f3e-838f-81c806acd0fc" (UID: "a70183b9-dfc0-4f3e-838f-81c806acd0fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406729 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc217648-43cf-48fc-a4e1-e371aacddb31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.406753 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70183b9-dfc0-4f3e-838f-81c806acd0fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.407135 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03cda70d-3c33-441f-bd8f-f838f16d2563-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03cda70d-3c33-441f-bd8f-f838f16d2563" (UID: "03cda70d-3c33-441f-bd8f-f838f16d2563"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.407542 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bac970a-8e3f-4265-b625-3af6eeea7cbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bac970a-8e3f-4265-b625-3af6eeea7cbe" (UID: "8bac970a-8e3f-4265-b625-3af6eeea7cbe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.407543 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c1026c8-75da-4392-99b6-96ccacb81316-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c1026c8-75da-4392-99b6-96ccacb81316" (UID: "7c1026c8-75da-4392-99b6-96ccacb81316"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.412592 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bac970a-8e3f-4265-b625-3af6eeea7cbe-kube-api-access-92wcw" (OuterVolumeSpecName: "kube-api-access-92wcw") pod "8bac970a-8e3f-4265-b625-3af6eeea7cbe" (UID: "8bac970a-8e3f-4265-b625-3af6eeea7cbe"). InnerVolumeSpecName "kube-api-access-92wcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.412657 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70183b9-dfc0-4f3e-838f-81c806acd0fc-kube-api-access-fsw8f" (OuterVolumeSpecName: "kube-api-access-fsw8f") pod "a70183b9-dfc0-4f3e-838f-81c806acd0fc" (UID: "a70183b9-dfc0-4f3e-838f-81c806acd0fc"). InnerVolumeSpecName "kube-api-access-fsw8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.413320 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c1026c8-75da-4392-99b6-96ccacb81316-kube-api-access-4qd6l" (OuterVolumeSpecName: "kube-api-access-4qd6l") pod "7c1026c8-75da-4392-99b6-96ccacb81316" (UID: "7c1026c8-75da-4392-99b6-96ccacb81316"). InnerVolumeSpecName "kube-api-access-4qd6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.414203 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc217648-43cf-48fc-a4e1-e371aacddb31-kube-api-access-rtb6m" (OuterVolumeSpecName: "kube-api-access-rtb6m") pod "dc217648-43cf-48fc-a4e1-e371aacddb31" (UID: "dc217648-43cf-48fc-a4e1-e371aacddb31"). InnerVolumeSpecName "kube-api-access-rtb6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.414349 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cda70d-3c33-441f-bd8f-f838f16d2563-kube-api-access-kwm72" (OuterVolumeSpecName: "kube-api-access-kwm72") pod "03cda70d-3c33-441f-bd8f-f838f16d2563" (UID: "03cda70d-3c33-441f-bd8f-f838f16d2563"). InnerVolumeSpecName "kube-api-access-kwm72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.510972 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtb6m\" (UniqueName: \"kubernetes.io/projected/dc217648-43cf-48fc-a4e1-e371aacddb31-kube-api-access-rtb6m\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.511058 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92wcw\" (UniqueName: \"kubernetes.io/projected/8bac970a-8e3f-4265-b625-3af6eeea7cbe-kube-api-access-92wcw\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.511077 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cda70d-3c33-441f-bd8f-f838f16d2563-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.511093 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwm72\" (UniqueName: 
\"kubernetes.io/projected/03cda70d-3c33-441f-bd8f-f838f16d2563-kube-api-access-kwm72\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.511113 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c1026c8-75da-4392-99b6-96ccacb81316-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.511224 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bac970a-8e3f-4265-b625-3af6eeea7cbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.511242 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsw8f\" (UniqueName: \"kubernetes.io/projected/a70183b9-dfc0-4f3e-838f-81c806acd0fc-kube-api-access-fsw8f\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.511259 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qd6l\" (UniqueName: \"kubernetes.io/projected/7c1026c8-75da-4392-99b6-96ccacb81316-kube-api-access-4qd6l\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.634530 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mp6ct" event={"ID":"dc217648-43cf-48fc-a4e1-e371aacddb31","Type":"ContainerDied","Data":"95cd07ec7b76126ba440918434ea1bcf92deb15c29ccb38e5b72f89d4822a9d6"} Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.634602 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95cd07ec7b76126ba440918434ea1bcf92deb15c29ccb38e5b72f89d4822a9d6" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.634701 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mp6ct" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.638860 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-w6gg8" event={"ID":"ad63280e-42ab-4d15-8f88-f19cd766140f","Type":"ContainerDied","Data":"e5f985cefefb58d1531009eb02e80fe0bc70068036c6e48fc7575082550a0740"} Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.638913 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5f985cefefb58d1531009eb02e80fe0bc70068036c6e48fc7575082550a0740" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.639010 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-w6gg8" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.642209 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4456-account-create-update-tsj7t" event={"ID":"a70183b9-dfc0-4f3e-838f-81c806acd0fc","Type":"ContainerDied","Data":"66ec8b49f15e380c7838cf3cd7933a3e2748fe19a5dd91e2993997d2fff28043"} Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.642280 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ec8b49f15e380c7838cf3cd7933a3e2748fe19a5dd91e2993997d2fff28043" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.642484 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4456-account-create-update-tsj7t" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.644269 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dcrt4" event={"ID":"7c1026c8-75da-4392-99b6-96ccacb81316","Type":"ContainerDied","Data":"9ffd840198f2aedcae4e516200abec7ccdd43d244f889c829dd626744cefc7b5"} Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.644330 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffd840198f2aedcae4e516200abec7ccdd43d244f889c829dd626744cefc7b5" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.644360 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dcrt4" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.649094 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" event={"ID":"03cda70d-3c33-441f-bd8f-f838f16d2563","Type":"ContainerDied","Data":"8562a55d41b4de851ca5a143dcd7bfe1c29e6b73ee980e283f3efbc4394feba4"} Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.649123 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5ffc-account-create-update-xkr67" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.649134 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8562a55d41b4de851ca5a143dcd7bfe1c29e6b73ee980e283f3efbc4394feba4" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.651752 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" event={"ID":"8bac970a-8e3f-4265-b625-3af6eeea7cbe","Type":"ContainerDied","Data":"94f182a64fd44c34c945b12d4c5f71c114c073600845751f57dd91be549b49e7"} Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.651810 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f182a64fd44c34c945b12d4c5f71c114c073600845751f57dd91be549b49e7" Jan 27 09:19:32 crc kubenswrapper[4799]: I0127 09:19:32.651885 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-019f-account-create-update-fjsnb" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.269944 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-csq5g"] Jan 27 09:19:38 crc kubenswrapper[4799]: E0127 09:19:38.270840 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad63280e-42ab-4d15-8f88-f19cd766140f" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.270856 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad63280e-42ab-4d15-8f88-f19cd766140f" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: E0127 09:19:38.270873 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03cda70d-3c33-441f-bd8f-f838f16d2563" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.270880 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03cda70d-3c33-441f-bd8f-f838f16d2563" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: E0127 09:19:38.270900 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70183b9-dfc0-4f3e-838f-81c806acd0fc" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.270908 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70183b9-dfc0-4f3e-838f-81c806acd0fc" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: E0127 09:19:38.270930 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c1026c8-75da-4392-99b6-96ccacb81316" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.270938 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c1026c8-75da-4392-99b6-96ccacb81316" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: E0127 09:19:38.270946 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc217648-43cf-48fc-a4e1-e371aacddb31" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.270953 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc217648-43cf-48fc-a4e1-e371aacddb31" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: E0127 09:19:38.270970 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bac970a-8e3f-4265-b625-3af6eeea7cbe" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.270980 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bac970a-8e3f-4265-b625-3af6eeea7cbe" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.271147 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70183b9-dfc0-4f3e-838f-81c806acd0fc" containerName="mariadb-account-create-update" Jan 27 
09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.271166 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c1026c8-75da-4392-99b6-96ccacb81316" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.271175 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad63280e-42ab-4d15-8f88-f19cd766140f" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.271187 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc217648-43cf-48fc-a4e1-e371aacddb31" containerName="mariadb-database-create" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.271201 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="03cda70d-3c33-441f-bd8f-f838f16d2563" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.271210 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bac970a-8e3f-4265-b625-3af6eeea7cbe" containerName="mariadb-account-create-update" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.271837 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.274183 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-f5czv" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.276415 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.277272 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.284448 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-csq5g"] Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.340529 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.340605 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-config-data\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.340730 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnn2g\" (UniqueName: \"kubernetes.io/projected/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-kube-api-access-wnn2g\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " 
pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.340780 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-scripts\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.442636 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnn2g\" (UniqueName: \"kubernetes.io/projected/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-kube-api-access-wnn2g\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.442732 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-scripts\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.442789 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.442818 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-config-data\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " 
pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.449188 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-scripts\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.449872 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.453095 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-config-data\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.468665 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnn2g\" (UniqueName: \"kubernetes.io/projected/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-kube-api-access-wnn2g\") pod \"nova-cell0-conductor-db-sync-csq5g\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:38 crc kubenswrapper[4799]: I0127 09:19:38.594939 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:39 crc kubenswrapper[4799]: I0127 09:19:39.052458 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-csq5g"] Jan 27 09:19:39 crc kubenswrapper[4799]: I0127 09:19:39.742383 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-csq5g" event={"ID":"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe","Type":"ContainerStarted","Data":"18de9effd4f019730fcbfddf286d189168d7b5c8634c4ba495765e686576b063"} Jan 27 09:19:39 crc kubenswrapper[4799]: I0127 09:19:39.742924 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-csq5g" event={"ID":"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe","Type":"ContainerStarted","Data":"9867814ad61a88ea642a2a6d330963bdc9b1929208981c6baff2c96aee815659"} Jan 27 09:19:39 crc kubenswrapper[4799]: I0127 09:19:39.762072 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-csq5g" podStartSLOduration=1.762054677 podStartE2EDuration="1.762054677s" podCreationTimestamp="2026-01-27 09:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:39.760931937 +0000 UTC m=+5646.072036012" watchObservedRunningTime="2026-01-27 09:19:39.762054677 +0000 UTC m=+5646.073158732" Jan 27 09:19:40 crc kubenswrapper[4799]: I0127 09:19:40.524828 4799 scope.go:117] "RemoveContainer" containerID="fc867c06f720d548cefd95be7eaff8e162dde88bb372af9793b97d1e44e8e54c" Jan 27 09:19:46 crc kubenswrapper[4799]: E0127 09:19:46.906313 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc98ab8_f7f1_4eac_b4bc_483e1e6fefbe.slice/crio-18de9effd4f019730fcbfddf286d189168d7b5c8634c4ba495765e686576b063.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:19:47 crc kubenswrapper[4799]: I0127 09:19:47.807423 4799 generic.go:334] "Generic (PLEG): container finished" podID="edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" containerID="18de9effd4f019730fcbfddf286d189168d7b5c8634c4ba495765e686576b063" exitCode=0 Jan 27 09:19:47 crc kubenswrapper[4799]: I0127 09:19:47.807511 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-csq5g" event={"ID":"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe","Type":"ContainerDied","Data":"18de9effd4f019730fcbfddf286d189168d7b5c8634c4ba495765e686576b063"} Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.124867 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.255071 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-scripts\") pod \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.255181 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-combined-ca-bundle\") pod \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.255214 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnn2g\" (UniqueName: \"kubernetes.io/projected/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-kube-api-access-wnn2g\") pod 
\"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.255240 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-config-data\") pod \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\" (UID: \"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe\") " Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.260977 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-kube-api-access-wnn2g" (OuterVolumeSpecName: "kube-api-access-wnn2g") pod "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" (UID: "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe"). InnerVolumeSpecName "kube-api-access-wnn2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.261233 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-scripts" (OuterVolumeSpecName: "scripts") pod "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" (UID: "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.281828 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" (UID: "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.287841 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-config-data" (OuterVolumeSpecName: "config-data") pod "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" (UID: "edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.358262 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.358344 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.358367 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnn2g\" (UniqueName: \"kubernetes.io/projected/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-kube-api-access-wnn2g\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.358387 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.824582 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-csq5g" event={"ID":"edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe","Type":"ContainerDied","Data":"9867814ad61a88ea642a2a6d330963bdc9b1929208981c6baff2c96aee815659"} Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.824629 4799 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="9867814ad61a88ea642a2a6d330963bdc9b1929208981c6baff2c96aee815659" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.824716 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-csq5g" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.909392 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:19:49 crc kubenswrapper[4799]: E0127 09:19:49.910123 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" containerName="nova-cell0-conductor-db-sync" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.910241 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" containerName="nova-cell0-conductor-db-sync" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.910655 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" containerName="nova-cell0-conductor-db-sync" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.911542 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.917099 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-f5czv" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.917212 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 09:19:49 crc kubenswrapper[4799]: I0127 09:19:49.925001 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.072092 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.072166 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.072367 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnq2h\" (UniqueName: \"kubernetes.io/projected/9cf460e2-e400-4353-8e03-611ab39e1842-kube-api-access-xnq2h\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.173626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnq2h\" (UniqueName: 
\"kubernetes.io/projected/9cf460e2-e400-4353-8e03-611ab39e1842-kube-api-access-xnq2h\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.173778 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.173834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.179594 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.183913 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.199409 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnq2h\" (UniqueName: \"kubernetes.io/projected/9cf460e2-e400-4353-8e03-611ab39e1842-kube-api-access-xnq2h\") pod \"nova-cell0-conductor-0\" (UID: 
\"9cf460e2-e400-4353-8e03-611ab39e1842\") " pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.229316 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.688514 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 09:19:50 crc kubenswrapper[4799]: I0127 09:19:50.842618 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9cf460e2-e400-4353-8e03-611ab39e1842","Type":"ContainerStarted","Data":"332146c89239a436fc72eb502b5554edbe1fad670e4cee96a736d8071c35a2b1"} Jan 27 09:19:51 crc kubenswrapper[4799]: I0127 09:19:51.858220 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9cf460e2-e400-4353-8e03-611ab39e1842","Type":"ContainerStarted","Data":"c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9"} Jan 27 09:19:51 crc kubenswrapper[4799]: I0127 09:19:51.858629 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:51 crc kubenswrapper[4799]: I0127 09:19:51.877541 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.877514841 podStartE2EDuration="2.877514841s" podCreationTimestamp="2026-01-27 09:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:51.871929589 +0000 UTC m=+5658.183033694" watchObservedRunningTime="2026-01-27 09:19:51.877514841 +0000 UTC m=+5658.188618916" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.260658 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 
09:19:55.695113 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7tn7x"] Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.696530 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.698729 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.698924 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.710649 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7tn7x"] Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.856304 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.857709 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.865176 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.873659 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.889550 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-scripts\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.889750 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfm9m\" (UniqueName: \"kubernetes.io/projected/d678b284-7ca9-4738-a934-e1638038844b-kube-api-access-wfm9m\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.889837 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-config-data\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.889874 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 
27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.904958 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.912643 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.917651 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.967513 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.969217 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.986842 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.987374 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995524 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-scripts\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995636 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59zk8\" (UniqueName: \"kubernetes.io/projected/2c40a416-327e-4270-a06b-a984eebb3d27-kube-api-access-59zk8\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995661 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-config-data\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995708 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfm9m\" (UniqueName: \"kubernetes.io/projected/d678b284-7ca9-4738-a934-e1638038844b-kube-api-access-wfm9m\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995745 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c40a416-327e-4270-a06b-a984eebb3d27-logs\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995791 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-config-data\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:55 crc kubenswrapper[4799]: I0127 09:19:55.995813 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.037698 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.045445 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfm9m\" (UniqueName: \"kubernetes.io/projected/d678b284-7ca9-4738-a934-e1638038844b-kube-api-access-wfm9m\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.059568 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-config-data\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.070633 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-scripts\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.076981 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7tn7x\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.098767 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9ms\" (UniqueName: \"kubernetes.io/projected/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-kube-api-access-jw9ms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099094 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099343 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-config-data\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099466 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkq84\" (UniqueName: \"kubernetes.io/projected/96147aa2-900d-4825-a662-2a37f034f0c3-kube-api-access-wkq84\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099610 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-59zk8\" (UniqueName: \"kubernetes.io/projected/2c40a416-327e-4270-a06b-a984eebb3d27-kube-api-access-59zk8\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099834 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96147aa2-900d-4825-a662-2a37f034f0c3-logs\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.099944 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-config-data\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.100093 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c40a416-327e-4270-a06b-a984eebb3d27-logs\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.100224 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.100798 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c40a416-327e-4270-a06b-a984eebb3d27-logs\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.111058 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-config-data\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.120042 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.141278 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59zk8\" (UniqueName: \"kubernetes.io/projected/2c40a416-327e-4270-a06b-a984eebb3d27-kube-api-access-59zk8\") pod \"nova-api-0\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.144408 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.146116 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.150793 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.162853 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64dbbdfd45-nw7r2"] Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.165081 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.184258 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.191439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.207033 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.207434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.208054 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-config-data\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 
crc kubenswrapper[4799]: I0127 09:19:56.208201 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkq84\" (UniqueName: \"kubernetes.io/projected/96147aa2-900d-4825-a662-2a37f034f0c3-kube-api-access-wkq84\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.208397 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96147aa2-900d-4825-a662-2a37f034f0c3-logs\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.208620 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.209360 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96147aa2-900d-4825-a662-2a37f034f0c3-logs\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.210290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9ms\" (UniqueName: \"kubernetes.io/projected/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-kube-api-access-jw9ms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.210951 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.215394 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64dbbdfd45-nw7r2"] Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.224729 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-config-data\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.228939 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.231134 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.234381 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkq84\" (UniqueName: \"kubernetes.io/projected/96147aa2-900d-4825-a662-2a37f034f0c3-kube-api-access-wkq84\") pod \"nova-metadata-0\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.238792 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9ms\" (UniqueName: 
\"kubernetes.io/projected/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-kube-api-access-jw9ms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.268998 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.302848 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314553 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5dt\" (UniqueName: \"kubernetes.io/projected/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-kube-api-access-4v5dt\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314601 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-config-data\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314683 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-nb\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314722 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-sb\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314757 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314774 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlf4\" (UniqueName: \"kubernetes.io/projected/82c19687-86a1-451e-9011-a17f777f9e39-kube-api-access-tdlf4\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314805 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-config\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.314837 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-dns-svc\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.324369 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.417999 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-config\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.418074 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-dns-svc\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.418132 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5dt\" (UniqueName: \"kubernetes.io/projected/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-kube-api-access-4v5dt\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.418158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-config-data\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.418224 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-nb\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc 
kubenswrapper[4799]: I0127 09:19:56.418272 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-sb\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.418339 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.418361 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlf4\" (UniqueName: \"kubernetes.io/projected/82c19687-86a1-451e-9011-a17f777f9e39-kube-api-access-tdlf4\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.419783 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-config\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.420466 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-dns-svc\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.428108 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-sb\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.429023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-nb\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.429228 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-config-data\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.439163 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.449198 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v5dt\" (UniqueName: \"kubernetes.io/projected/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-kube-api-access-4v5dt\") pod \"dnsmasq-dns-64dbbdfd45-nw7r2\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.451837 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlf4\" (UniqueName: \"kubernetes.io/projected/82c19687-86a1-451e-9011-a17f777f9e39-kube-api-access-tdlf4\") pod 
\"nova-scheduler-0\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.517748 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.533765 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.534471 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.856681 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.954871 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96147aa2-900d-4825-a662-2a37f034f0c3","Type":"ContainerStarted","Data":"925657144b00e89b3d65d15682e9effc1f14c6fa961012c517c706da36b110be"} Jan 27 09:19:56 crc kubenswrapper[4799]: I0127 09:19:56.958330 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c40a416-327e-4270-a06b-a984eebb3d27","Type":"ContainerStarted","Data":"bc976f02f129c6640a9be6f75d3fec54d88c5f458cdb792b20317d838e79c1d3"} Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.129255 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7tn7x"] Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.203979 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.214246 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9qp8p"] Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.215583 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.217992 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.219228 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.222137 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9qp8p"] Jan 27 09:19:57 crc kubenswrapper[4799]: W0127 09:19:57.269959 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd678b284_7ca9_4738_a934_e1638038844b.slice/crio-bb6090e4a7b683e79b9643c244c69f814188cb4703a1a389f256aa46fc6d7449 WatchSource:0}: Error finding container bb6090e4a7b683e79b9643c244c69f814188cb4703a1a389f256aa46fc6d7449: Status 404 returned error can't find the container with id bb6090e4a7b683e79b9643c244c69f814188cb4703a1a389f256aa46fc6d7449 Jan 27 09:19:57 crc kubenswrapper[4799]: W0127 09:19:57.272506 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc3da9c_c322_43a7_8d4e_56518c6f70cc.slice/crio-c4f65c26d220acccfb8775a85138522087fd1dc0b100e178b498a32ca547d16a WatchSource:0}: Error finding container c4f65c26d220acccfb8775a85138522087fd1dc0b100e178b498a32ca547d16a: Status 404 returned error can't find the container with id c4f65c26d220acccfb8775a85138522087fd1dc0b100e178b498a32ca547d16a Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.336913 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.351808 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64dbbdfd45-nw7r2"] Jan 
27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.365553 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-scripts\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.365652 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzz6c\" (UniqueName: \"kubernetes.io/projected/75b93d40-5c8a-47d2-8f67-3b22d2594c19-kube-api-access-vzz6c\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.365783 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.365847 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-config-data\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: W0127 09:19:57.398188 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c19687_86a1_451e_9011_a17f777f9e39.slice/crio-c2beae77059cc34f82362d0e16e6bdc8ff303762a089a500feb3ce893e6f7178 WatchSource:0}: 
Error finding container c2beae77059cc34f82362d0e16e6bdc8ff303762a089a500feb3ce893e6f7178: Status 404 returned error can't find the container with id c2beae77059cc34f82362d0e16e6bdc8ff303762a089a500feb3ce893e6f7178 Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.473538 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-scripts\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.473614 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzz6c\" (UniqueName: \"kubernetes.io/projected/75b93d40-5c8a-47d2-8f67-3b22d2594c19-kube-api-access-vzz6c\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.473693 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.473746 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-config-data\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.478207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-scripts\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.479820 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-config-data\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.481906 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.495037 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzz6c\" (UniqueName: \"kubernetes.io/projected/75b93d40-5c8a-47d2-8f67-3b22d2594c19-kube-api-access-vzz6c\") pod \"nova-cell1-conductor-db-sync-9qp8p\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.544678 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.967490 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6cc3da9c-c322-43a7-8d4e-56518c6f70cc","Type":"ContainerStarted","Data":"c4f65c26d220acccfb8775a85138522087fd1dc0b100e178b498a32ca547d16a"} Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.969082 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7tn7x" event={"ID":"d678b284-7ca9-4738-a934-e1638038844b","Type":"ContainerStarted","Data":"bb6090e4a7b683e79b9643c244c69f814188cb4703a1a389f256aa46fc6d7449"} Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.970135 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82c19687-86a1-451e-9011-a17f777f9e39","Type":"ContainerStarted","Data":"c2beae77059cc34f82362d0e16e6bdc8ff303762a089a500feb3ce893e6f7178"} Jan 27 09:19:57 crc kubenswrapper[4799]: I0127 09:19:57.971381 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" event={"ID":"3ed81796-f7dd-4fe5-b876-d4761d0fddf8","Type":"ContainerStarted","Data":"4d5ba0652a756e941784b08073307c1207e782f2fc4bf71415a105f8ae986c67"} Jan 27 09:19:58 crc kubenswrapper[4799]: W0127 09:19:58.179785 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b93d40_5c8a_47d2_8f67_3b22d2594c19.slice/crio-da498bfcc2eeb6b368985cf7b853c5376b7ad25e22630d3ba5fd87e5ec94cfc1 WatchSource:0}: Error finding container da498bfcc2eeb6b368985cf7b853c5376b7ad25e22630d3ba5fd87e5ec94cfc1: Status 404 returned error can't find the container with id da498bfcc2eeb6b368985cf7b853c5376b7ad25e22630d3ba5fd87e5ec94cfc1 Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.183634 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-9qp8p"] Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.982668 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82c19687-86a1-451e-9011-a17f777f9e39","Type":"ContainerStarted","Data":"d391a2917e9176fea1b6d3cbae82fd4595a6b1381ca046c6d0ffb9a76aaf7ccc"} Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.984550 4799 generic.go:334] "Generic (PLEG): container finished" podID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerID="09bbb9d0ae7b21e941a92070802040cabe4ed81847d65b606c2db6dbf1ee4635" exitCode=0 Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.984619 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" event={"ID":"3ed81796-f7dd-4fe5-b876-d4761d0fddf8","Type":"ContainerDied","Data":"09bbb9d0ae7b21e941a92070802040cabe4ed81847d65b606c2db6dbf1ee4635"} Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.992143 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c40a416-327e-4270-a06b-a984eebb3d27","Type":"ContainerStarted","Data":"e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a"} Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.992188 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c40a416-327e-4270-a06b-a984eebb3d27","Type":"ContainerStarted","Data":"083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8"} Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.995026 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" event={"ID":"75b93d40-5c8a-47d2-8f67-3b22d2594c19","Type":"ContainerStarted","Data":"ebe0841af298b53325004f708c663198dbd52cfe14a24a341ee34a8885048a1a"} Jan 27 09:19:58 crc kubenswrapper[4799]: I0127 09:19:58.995064 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-9qp8p" event={"ID":"75b93d40-5c8a-47d2-8f67-3b22d2594c19","Type":"ContainerStarted","Data":"da498bfcc2eeb6b368985cf7b853c5376b7ad25e22630d3ba5fd87e5ec94cfc1"} Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.002730 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.002712189 podStartE2EDuration="3.002712189s" podCreationTimestamp="2026-01-27 09:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:59.002614356 +0000 UTC m=+5665.313718431" watchObservedRunningTime="2026-01-27 09:19:59.002712189 +0000 UTC m=+5665.313816254" Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.013405 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96147aa2-900d-4825-a662-2a37f034f0c3","Type":"ContainerStarted","Data":"83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2"} Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.013453 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96147aa2-900d-4825-a662-2a37f034f0c3","Type":"ContainerStarted","Data":"c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123"} Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.038596 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6cc3da9c-c322-43a7-8d4e-56518c6f70cc","Type":"ContainerStarted","Data":"2c32b01e4fd558523da8fdddb26cb8c8aefcfa8da052eb16af87379a26dfdcbf"} Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.045718 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7tn7x" event={"ID":"d678b284-7ca9-4738-a934-e1638038844b","Type":"ContainerStarted","Data":"2c43c8034707a68bf10d7af267a80113908fb2eaff30a8ee5c21ea4387943f67"} Jan 27 
09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.050215 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" podStartSLOduration=2.050170001 podStartE2EDuration="2.050170001s" podCreationTimestamp="2026-01-27 09:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:59.03066499 +0000 UTC m=+5665.341769055" watchObservedRunningTime="2026-01-27 09:19:59.050170001 +0000 UTC m=+5665.361274086" Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.101796 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.101778456 podStartE2EDuration="4.101778456s" podCreationTimestamp="2026-01-27 09:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:59.097448968 +0000 UTC m=+5665.408553043" watchObservedRunningTime="2026-01-27 09:19:59.101778456 +0000 UTC m=+5665.412882521" Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.148932 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.148909078 podStartE2EDuration="4.148909078s" podCreationTimestamp="2026-01-27 09:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:59.143768198 +0000 UTC m=+5665.454872263" watchObservedRunningTime="2026-01-27 09:19:59.148909078 +0000 UTC m=+5665.460013153" Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.154577 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.154555482 podStartE2EDuration="4.154555482s" podCreationTimestamp="2026-01-27 09:19:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:59.121520973 +0000 UTC m=+5665.432625038" watchObservedRunningTime="2026-01-27 09:19:59.154555482 +0000 UTC m=+5665.465659537" Jan 27 09:19:59 crc kubenswrapper[4799]: I0127 09:19:59.162784 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7tn7x" podStartSLOduration=4.162766195 podStartE2EDuration="4.162766195s" podCreationTimestamp="2026-01-27 09:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:19:59.158904041 +0000 UTC m=+5665.470008106" watchObservedRunningTime="2026-01-27 09:19:59.162766195 +0000 UTC m=+5665.473870260" Jan 27 09:20:00 crc kubenswrapper[4799]: I0127 09:20:00.055983 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" event={"ID":"3ed81796-f7dd-4fe5-b876-d4761d0fddf8","Type":"ContainerStarted","Data":"56754cbfc9462a9c592ef30f720dd0e53d592bb9e4975ff855109f3b5ab43856"} Jan 27 09:20:00 crc kubenswrapper[4799]: I0127 09:20:00.099372 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" podStartSLOduration=4.099346737 podStartE2EDuration="4.099346737s" podCreationTimestamp="2026-01-27 09:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:00.077728739 +0000 UTC m=+5666.388832804" watchObservedRunningTime="2026-01-27 09:20:00.099346737 +0000 UTC m=+5666.410450802" Jan 27 09:20:01 crc kubenswrapper[4799]: I0127 09:20:01.064414 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:20:01 crc kubenswrapper[4799]: I0127 09:20:01.271168 4799 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:20:01 crc kubenswrapper[4799]: I0127 09:20:01.271237 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:20:01 crc kubenswrapper[4799]: I0127 09:20:01.304280 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:20:01 crc kubenswrapper[4799]: I0127 09:20:01.519288 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:20:04 crc kubenswrapper[4799]: I0127 09:20:04.090032 4799 generic.go:334] "Generic (PLEG): container finished" podID="d678b284-7ca9-4738-a934-e1638038844b" containerID="2c43c8034707a68bf10d7af267a80113908fb2eaff30a8ee5c21ea4387943f67" exitCode=0 Jan 27 09:20:04 crc kubenswrapper[4799]: I0127 09:20:04.090122 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7tn7x" event={"ID":"d678b284-7ca9-4738-a934-e1638038844b","Type":"ContainerDied","Data":"2c43c8034707a68bf10d7af267a80113908fb2eaff30a8ee5c21ea4387943f67"} Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.500662 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.628658 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-scripts\") pod \"d678b284-7ca9-4738-a934-e1638038844b\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.628767 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-combined-ca-bundle\") pod \"d678b284-7ca9-4738-a934-e1638038844b\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.628809 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfm9m\" (UniqueName: \"kubernetes.io/projected/d678b284-7ca9-4738-a934-e1638038844b-kube-api-access-wfm9m\") pod \"d678b284-7ca9-4738-a934-e1638038844b\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.628860 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-config-data\") pod \"d678b284-7ca9-4738-a934-e1638038844b\" (UID: \"d678b284-7ca9-4738-a934-e1638038844b\") " Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.634984 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d678b284-7ca9-4738-a934-e1638038844b-kube-api-access-wfm9m" (OuterVolumeSpecName: "kube-api-access-wfm9m") pod "d678b284-7ca9-4738-a934-e1638038844b" (UID: "d678b284-7ca9-4738-a934-e1638038844b"). InnerVolumeSpecName "kube-api-access-wfm9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.636925 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-scripts" (OuterVolumeSpecName: "scripts") pod "d678b284-7ca9-4738-a934-e1638038844b" (UID: "d678b284-7ca9-4738-a934-e1638038844b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.661371 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-config-data" (OuterVolumeSpecName: "config-data") pod "d678b284-7ca9-4738-a934-e1638038844b" (UID: "d678b284-7ca9-4738-a934-e1638038844b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.664919 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d678b284-7ca9-4738-a934-e1638038844b" (UID: "d678b284-7ca9-4738-a934-e1638038844b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.731709 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.731750 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.731784 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfm9m\" (UniqueName: \"kubernetes.io/projected/d678b284-7ca9-4738-a934-e1638038844b-kube-api-access-wfm9m\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:05 crc kubenswrapper[4799]: I0127 09:20:05.731796 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d678b284-7ca9-4738-a934-e1638038844b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.117718 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7tn7x" event={"ID":"d678b284-7ca9-4738-a934-e1638038844b","Type":"ContainerDied","Data":"bb6090e4a7b683e79b9643c244c69f814188cb4703a1a389f256aa46fc6d7449"} Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.117771 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6090e4a7b683e79b9643c244c69f814188cb4703a1a389f256aa46fc6d7449" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.117858 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7tn7x" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.120905 4799 generic.go:334] "Generic (PLEG): container finished" podID="75b93d40-5c8a-47d2-8f67-3b22d2594c19" containerID="ebe0841af298b53325004f708c663198dbd52cfe14a24a341ee34a8885048a1a" exitCode=0 Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.120968 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" event={"ID":"75b93d40-5c8a-47d2-8f67-3b22d2594c19","Type":"ContainerDied","Data":"ebe0841af298b53325004f708c663198dbd52cfe14a24a341ee34a8885048a1a"} Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.184766 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.184822 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.271476 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.271541 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.300149 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.300570 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="82c19687-86a1-451e-9011-a17f777f9e39" containerName="nova-scheduler-scheduler" containerID="cri-o://d391a2917e9176fea1b6d3cbae82fd4595a6b1381ca046c6d0ffb9a76aaf7ccc" gracePeriod=30 Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.303621 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.318007 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.318897 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.396586 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.537345 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.601709 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567d4c69c7-bsznb"] Jan 27 09:20:06 crc kubenswrapper[4799]: I0127 09:20:06.601953 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" podUID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerName="dnsmasq-dns" containerID="cri-o://cde58785e9861b0deb5fedfe490ae4ec5a67858bc8e363c87a84536fc5a95b80" gracePeriod=10 Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.158018 4799 generic.go:334] "Generic (PLEG): container finished" podID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerID="cde58785e9861b0deb5fedfe490ae4ec5a67858bc8e363c87a84536fc5a95b80" exitCode=0 Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.159191 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" event={"ID":"bb9cf218-3d46-4767-82c8-7a8a0d569065","Type":"ContainerDied","Data":"cde58785e9861b0deb5fedfe490ae4ec5a67858bc8e363c87a84536fc5a95b80"} Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.159438 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-log" containerID="cri-o://c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123" gracePeriod=30 Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.159830 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-log" containerID="cri-o://083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8" gracePeriod=30 Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.160107 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-metadata" containerID="cri-o://83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2" gracePeriod=30 Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.160170 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-api" containerID="cri-o://e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a" gracePeriod=30 Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.166013 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.52:8774/\": EOF" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.170082 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.52:8774/\": EOF" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.172135 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:20:07 crc 
kubenswrapper[4799]: I0127 09:20:07.200097 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.53:8775/\": EOF" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.200179 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.53:8775/\": EOF" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.366718 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.480242 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-config\") pod \"bb9cf218-3d46-4767-82c8-7a8a0d569065\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.480537 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-sb\") pod \"bb9cf218-3d46-4767-82c8-7a8a0d569065\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.480581 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw2n5\" (UniqueName: \"kubernetes.io/projected/bb9cf218-3d46-4767-82c8-7a8a0d569065-kube-api-access-zw2n5\") pod \"bb9cf218-3d46-4767-82c8-7a8a0d569065\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.480784 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-nb\") pod \"bb9cf218-3d46-4767-82c8-7a8a0d569065\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.480870 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-dns-svc\") pod \"bb9cf218-3d46-4767-82c8-7a8a0d569065\" (UID: \"bb9cf218-3d46-4767-82c8-7a8a0d569065\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.489517 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9cf218-3d46-4767-82c8-7a8a0d569065-kube-api-access-zw2n5" (OuterVolumeSpecName: "kube-api-access-zw2n5") pod "bb9cf218-3d46-4767-82c8-7a8a0d569065" (UID: "bb9cf218-3d46-4767-82c8-7a8a0d569065"). InnerVolumeSpecName "kube-api-access-zw2n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.545860 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:20:07 crc kubenswrapper[4799]: E0127 09:20:07.556141 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c40a416_327e_4270_a06b_a984eebb3d27.slice/crio-conmon-083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c40a416_327e_4270_a06b_a984eebb3d27.slice/crio-083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96147aa2_900d_4825_a662_2a37f034f0c3.slice/crio-c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96147aa2_900d_4825_a662_2a37f034f0c3.slice/crio-conmon-c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.556880 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb9cf218-3d46-4767-82c8-7a8a0d569065" (UID: "bb9cf218-3d46-4767-82c8-7a8a0d569065"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.603191 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb9cf218-3d46-4767-82c8-7a8a0d569065" (UID: "bb9cf218-3d46-4767-82c8-7a8a0d569065"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.603490 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb9cf218-3d46-4767-82c8-7a8a0d569065" (UID: "bb9cf218-3d46-4767-82c8-7a8a0d569065"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.605729 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.606002 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw2n5\" (UniqueName: \"kubernetes.io/projected/bb9cf218-3d46-4767-82c8-7a8a0d569065-kube-api-access-zw2n5\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.609589 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-config" (OuterVolumeSpecName: "config") pod "bb9cf218-3d46-4767-82c8-7a8a0d569065" (UID: "bb9cf218-3d46-4767-82c8-7a8a0d569065"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.707475 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-scripts\") pod \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.707563 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-combined-ca-bundle\") pod \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.707611 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzz6c\" (UniqueName: \"kubernetes.io/projected/75b93d40-5c8a-47d2-8f67-3b22d2594c19-kube-api-access-vzz6c\") pod \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.707709 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-config-data\") pod \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\" (UID: \"75b93d40-5c8a-47d2-8f67-3b22d2594c19\") " Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.708111 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.708129 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 
09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.708138 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9cf218-3d46-4767-82c8-7a8a0d569065-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.713519 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-scripts" (OuterVolumeSpecName: "scripts") pod "75b93d40-5c8a-47d2-8f67-3b22d2594c19" (UID: "75b93d40-5c8a-47d2-8f67-3b22d2594c19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.714489 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b93d40-5c8a-47d2-8f67-3b22d2594c19-kube-api-access-vzz6c" (OuterVolumeSpecName: "kube-api-access-vzz6c") pod "75b93d40-5c8a-47d2-8f67-3b22d2594c19" (UID: "75b93d40-5c8a-47d2-8f67-3b22d2594c19"). InnerVolumeSpecName "kube-api-access-vzz6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.732912 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75b93d40-5c8a-47d2-8f67-3b22d2594c19" (UID: "75b93d40-5c8a-47d2-8f67-3b22d2594c19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.743833 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-config-data" (OuterVolumeSpecName: "config-data") pod "75b93d40-5c8a-47d2-8f67-3b22d2594c19" (UID: "75b93d40-5c8a-47d2-8f67-3b22d2594c19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.809792 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.809835 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.809848 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b93d40-5c8a-47d2-8f67-3b22d2594c19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:07 crc kubenswrapper[4799]: I0127 09:20:07.809861 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzz6c\" (UniqueName: \"kubernetes.io/projected/75b93d40-5c8a-47d2-8f67-3b22d2594c19-kube-api-access-vzz6c\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.168771 4799 generic.go:334] "Generic (PLEG): container finished" podID="82c19687-86a1-451e-9011-a17f777f9e39" containerID="d391a2917e9176fea1b6d3cbae82fd4595a6b1381ca046c6d0ffb9a76aaf7ccc" exitCode=0 Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.168845 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82c19687-86a1-451e-9011-a17f777f9e39","Type":"ContainerDied","Data":"d391a2917e9176fea1b6d3cbae82fd4595a6b1381ca046c6d0ffb9a76aaf7ccc"} Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.171522 4799 generic.go:334] "Generic (PLEG): container finished" podID="2c40a416-327e-4270-a06b-a984eebb3d27" containerID="083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8" exitCode=143 Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.171608 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c40a416-327e-4270-a06b-a984eebb3d27","Type":"ContainerDied","Data":"083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8"} Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.173705 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" event={"ID":"75b93d40-5c8a-47d2-8f67-3b22d2594c19","Type":"ContainerDied","Data":"da498bfcc2eeb6b368985cf7b853c5376b7ad25e22630d3ba5fd87e5ec94cfc1"} Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.173857 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da498bfcc2eeb6b368985cf7b853c5376b7ad25e22630d3ba5fd87e5ec94cfc1" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.173787 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9qp8p" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.176476 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.176637 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567d4c69c7-bsznb" event={"ID":"bb9cf218-3d46-4767-82c8-7a8a0d569065","Type":"ContainerDied","Data":"b6182598eb570c1934ec27c55ae34fc6951a763957613993e12146e2dab31540"} Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.176868 4799 scope.go:117] "RemoveContainer" containerID="cde58785e9861b0deb5fedfe490ae4ec5a67858bc8e363c87a84536fc5a95b80" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.178822 4799 generic.go:334] "Generic (PLEG): container finished" podID="96147aa2-900d-4825-a662-2a37f034f0c3" containerID="c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123" exitCode=143 Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.179337 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96147aa2-900d-4825-a662-2a37f034f0c3","Type":"ContainerDied","Data":"c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123"} Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.224509 4799 scope.go:117] "RemoveContainer" containerID="f947f78ed0e84ff3b0f6d1e8d150584c6ebc35378c57e84ffdbf82c5972969d4" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.226621 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567d4c69c7-bsznb"] Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.248869 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-567d4c69c7-bsznb"] Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.258257 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:20:08 crc kubenswrapper[4799]: E0127 09:20:08.258752 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerName="dnsmasq-dns" Jan 27 09:20:08 
crc kubenswrapper[4799]: I0127 09:20:08.258779 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerName="dnsmasq-dns" Jan 27 09:20:08 crc kubenswrapper[4799]: E0127 09:20:08.258799 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerName="init" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.258808 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerName="init" Jan 27 09:20:08 crc kubenswrapper[4799]: E0127 09:20:08.258831 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b93d40-5c8a-47d2-8f67-3b22d2594c19" containerName="nova-cell1-conductor-db-sync" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.258839 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b93d40-5c8a-47d2-8f67-3b22d2594c19" containerName="nova-cell1-conductor-db-sync" Jan 27 09:20:08 crc kubenswrapper[4799]: E0127 09:20:08.258873 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d678b284-7ca9-4738-a934-e1638038844b" containerName="nova-manage" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.258881 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d678b284-7ca9-4738-a934-e1638038844b" containerName="nova-manage" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.259082 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9cf218-3d46-4767-82c8-7a8a0d569065" containerName="dnsmasq-dns" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.259098 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b93d40-5c8a-47d2-8f67-3b22d2594c19" containerName="nova-cell1-conductor-db-sync" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.259126 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d678b284-7ca9-4738-a934-e1638038844b" containerName="nova-manage" Jan 27 09:20:08 crc 
kubenswrapper[4799]: I0127 09:20:08.259916 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.264411 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.313678 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.426892 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgspb\" (UniqueName: \"kubernetes.io/projected/3c980df1-a520-4c83-9094-65ffa132b464-kube-api-access-qgspb\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.426933 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.426964 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.473313 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9cf218-3d46-4767-82c8-7a8a0d569065" path="/var/lib/kubelet/pods/bb9cf218-3d46-4767-82c8-7a8a0d569065/volumes" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 
09:20:08.529327 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgspb\" (UniqueName: \"kubernetes.io/projected/3c980df1-a520-4c83-9094-65ffa132b464-kube-api-access-qgspb\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.529395 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.529436 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.539573 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.551977 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.579031 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgspb\" 
(UniqueName: \"kubernetes.io/projected/3c980df1-a520-4c83-9094-65ffa132b464-kube-api-access-qgspb\") pod \"nova-cell1-conductor-0\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.694686 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.774832 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.936752 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdlf4\" (UniqueName: \"kubernetes.io/projected/82c19687-86a1-451e-9011-a17f777f9e39-kube-api-access-tdlf4\") pod \"82c19687-86a1-451e-9011-a17f777f9e39\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.936887 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-combined-ca-bundle\") pod \"82c19687-86a1-451e-9011-a17f777f9e39\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.936942 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-config-data\") pod \"82c19687-86a1-451e-9011-a17f777f9e39\" (UID: \"82c19687-86a1-451e-9011-a17f777f9e39\") " Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.950702 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c19687-86a1-451e-9011-a17f777f9e39-kube-api-access-tdlf4" (OuterVolumeSpecName: "kube-api-access-tdlf4") pod "82c19687-86a1-451e-9011-a17f777f9e39" (UID: 
"82c19687-86a1-451e-9011-a17f777f9e39"). InnerVolumeSpecName "kube-api-access-tdlf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.978563 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-config-data" (OuterVolumeSpecName: "config-data") pod "82c19687-86a1-451e-9011-a17f777f9e39" (UID: "82c19687-86a1-451e-9011-a17f777f9e39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:08 crc kubenswrapper[4799]: I0127 09:20:08.981612 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82c19687-86a1-451e-9011-a17f777f9e39" (UID: "82c19687-86a1-451e-9011-a17f777f9e39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.039091 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdlf4\" (UniqueName: \"kubernetes.io/projected/82c19687-86a1-451e-9011-a17f777f9e39-kube-api-access-tdlf4\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.039152 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.039167 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c19687-86a1-451e-9011-a17f777f9e39-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.171676 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 
09:20:09 crc kubenswrapper[4799]: W0127 09:20:09.172172 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c980df1_a520_4c83_9094_65ffa132b464.slice/crio-cecdbd2a03c842c48d0f26d4c8955b3e51fafcc9153f28a775f60d87008b6e20 WatchSource:0}: Error finding container cecdbd2a03c842c48d0f26d4c8955b3e51fafcc9153f28a775f60d87008b6e20: Status 404 returned error can't find the container with id cecdbd2a03c842c48d0f26d4c8955b3e51fafcc9153f28a775f60d87008b6e20 Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.190832 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c980df1-a520-4c83-9094-65ffa132b464","Type":"ContainerStarted","Data":"cecdbd2a03c842c48d0f26d4c8955b3e51fafcc9153f28a775f60d87008b6e20"} Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.196195 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.196738 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82c19687-86a1-451e-9011-a17f777f9e39","Type":"ContainerDied","Data":"c2beae77059cc34f82362d0e16e6bdc8ff303762a089a500feb3ce893e6f7178"} Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.196953 4799 scope.go:117] "RemoveContainer" containerID="d391a2917e9176fea1b6d3cbae82fd4595a6b1381ca046c6d0ffb9a76aaf7ccc" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.315801 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.325633 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.337254 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:09 crc kubenswrapper[4799]: E0127 
09:20:09.337733 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c19687-86a1-451e-9011-a17f777f9e39" containerName="nova-scheduler-scheduler" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.337748 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c19687-86a1-451e-9011-a17f777f9e39" containerName="nova-scheduler-scheduler" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.337948 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c19687-86a1-451e-9011-a17f777f9e39" containerName="nova-scheduler-scheduler" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.341408 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.347029 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.352150 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.448358 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-config-data\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.448554 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.448594 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-clqdj\" (UniqueName: \"kubernetes.io/projected/c537b700-af95-43ea-98e3-a37e340f1b35-kube-api-access-clqdj\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.550729 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-config-data\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.550895 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.550941 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqdj\" (UniqueName: \"kubernetes.io/projected/c537b700-af95-43ea-98e3-a37e340f1b35-kube-api-access-clqdj\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.555417 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-config-data\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.555907 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.570367 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqdj\" (UniqueName: \"kubernetes.io/projected/c537b700-af95-43ea-98e3-a37e340f1b35-kube-api-access-clqdj\") pod \"nova-scheduler-0\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:09 crc kubenswrapper[4799]: I0127 09:20:09.669695 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:10 crc kubenswrapper[4799]: I0127 09:20:10.139360 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:10 crc kubenswrapper[4799]: I0127 09:20:10.209928 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c537b700-af95-43ea-98e3-a37e340f1b35","Type":"ContainerStarted","Data":"9b98b706ad20f4d81f60c32edc1e2e157895a57468dea102d5a66bc5dd09dee5"} Jan 27 09:20:10 crc kubenswrapper[4799]: I0127 09:20:10.211742 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c980df1-a520-4c83-9094-65ffa132b464","Type":"ContainerStarted","Data":"942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb"} Jan 27 09:20:10 crc kubenswrapper[4799]: I0127 09:20:10.213112 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:10 crc kubenswrapper[4799]: I0127 09:20:10.234692 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.234677725 podStartE2EDuration="2.234677725s" podCreationTimestamp="2026-01-27 09:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
09:20:10.228188808 +0000 UTC m=+5676.539292903" watchObservedRunningTime="2026-01-27 09:20:10.234677725 +0000 UTC m=+5676.545781790" Jan 27 09:20:10 crc kubenswrapper[4799]: I0127 09:20:10.461676 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c19687-86a1-451e-9011-a17f777f9e39" path="/var/lib/kubelet/pods/82c19687-86a1-451e-9011-a17f777f9e39/volumes" Jan 27 09:20:11 crc kubenswrapper[4799]: I0127 09:20:11.225064 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c537b700-af95-43ea-98e3-a37e340f1b35","Type":"ContainerStarted","Data":"1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114"} Jan 27 09:20:11 crc kubenswrapper[4799]: I0127 09:20:11.248136 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.24812047 podStartE2EDuration="2.24812047s" podCreationTimestamp="2026-01-27 09:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:11.246095004 +0000 UTC m=+5677.557199079" watchObservedRunningTime="2026-01-27 09:20:11.24812047 +0000 UTC m=+5677.559224535" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.155231 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.251731 4799 generic.go:334] "Generic (PLEG): container finished" podID="96147aa2-900d-4825-a662-2a37f034f0c3" containerID="83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2" exitCode=0 Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.251775 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96147aa2-900d-4825-a662-2a37f034f0c3","Type":"ContainerDied","Data":"83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2"} Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.251804 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96147aa2-900d-4825-a662-2a37f034f0c3","Type":"ContainerDied","Data":"925657144b00e89b3d65d15682e9effc1f14c6fa961012c517c706da36b110be"} Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.251822 4799 scope.go:117] "RemoveContainer" containerID="83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.251949 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.292816 4799 scope.go:117] "RemoveContainer" containerID="c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.311403 4799 scope.go:117] "RemoveContainer" containerID="83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2" Jan 27 09:20:13 crc kubenswrapper[4799]: E0127 09:20:13.313012 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2\": container with ID starting with 83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2 not found: ID does not exist" containerID="83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.313054 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2"} err="failed to get container status \"83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2\": rpc error: code = NotFound desc = could not find container \"83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2\": container with ID starting with 83bdb24d71a76b72742fe8727174acaebf09d5e94311e2768d94703dd4d46aa2 not found: ID does not exist" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.313080 4799 scope.go:117] "RemoveContainer" containerID="c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123" Jan 27 09:20:13 crc kubenswrapper[4799]: E0127 09:20:13.313522 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123\": container with ID starting with 
c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123 not found: ID does not exist" containerID="c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.313560 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123"} err="failed to get container status \"c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123\": rpc error: code = NotFound desc = could not find container \"c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123\": container with ID starting with c5c7054fc9fb842d5b1171953068249faabb1c9b5a939468b57fbbc55dec9123 not found: ID does not exist" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.325759 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-combined-ca-bundle\") pod \"96147aa2-900d-4825-a662-2a37f034f0c3\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.325914 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96147aa2-900d-4825-a662-2a37f034f0c3-logs\") pod \"96147aa2-900d-4825-a662-2a37f034f0c3\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.325956 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkq84\" (UniqueName: \"kubernetes.io/projected/96147aa2-900d-4825-a662-2a37f034f0c3-kube-api-access-wkq84\") pod \"96147aa2-900d-4825-a662-2a37f034f0c3\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.326027 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-config-data\") pod \"96147aa2-900d-4825-a662-2a37f034f0c3\" (UID: \"96147aa2-900d-4825-a662-2a37f034f0c3\") " Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.326536 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96147aa2-900d-4825-a662-2a37f034f0c3-logs" (OuterVolumeSpecName: "logs") pod "96147aa2-900d-4825-a662-2a37f034f0c3" (UID: "96147aa2-900d-4825-a662-2a37f034f0c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.355765 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96147aa2-900d-4825-a662-2a37f034f0c3-kube-api-access-wkq84" (OuterVolumeSpecName: "kube-api-access-wkq84") pod "96147aa2-900d-4825-a662-2a37f034f0c3" (UID: "96147aa2-900d-4825-a662-2a37f034f0c3"). InnerVolumeSpecName "kube-api-access-wkq84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.359037 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96147aa2-900d-4825-a662-2a37f034f0c3" (UID: "96147aa2-900d-4825-a662-2a37f034f0c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.369284 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-config-data" (OuterVolumeSpecName: "config-data") pod "96147aa2-900d-4825-a662-2a37f034f0c3" (UID: "96147aa2-900d-4825-a662-2a37f034f0c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.428282 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.428353 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96147aa2-900d-4825-a662-2a37f034f0c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.428370 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96147aa2-900d-4825-a662-2a37f034f0c3-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.428381 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkq84\" (UniqueName: \"kubernetes.io/projected/96147aa2-900d-4825-a662-2a37f034f0c3-kube-api-access-wkq84\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.585957 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.597725 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.615909 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:13 crc kubenswrapper[4799]: E0127 09:20:13.616499 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-log" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.616521 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-log" Jan 27 09:20:13 crc 
kubenswrapper[4799]: E0127 09:20:13.616537 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-metadata" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.616545 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-metadata" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.616777 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-metadata" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.616795 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" containerName="nova-metadata-log" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.618110 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.621221 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.625952 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.733887 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-config-data\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.733965 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e21c051-0766-44a9-85d5-d985de7e5cb2-logs\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") 
" pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.733992 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/5e21c051-0766-44a9-85d5-d985de7e5cb2-kube-api-access-2qrqh\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.734017 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.835237 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-config-data\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.835334 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e21c051-0766-44a9-85d5-d985de7e5cb2-logs\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.835359 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/5e21c051-0766-44a9-85d5-d985de7e5cb2-kube-api-access-2qrqh\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.835381 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.836433 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e21c051-0766-44a9-85d5-d985de7e5cb2-logs\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.840550 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.841809 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-config-data\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.853777 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/5e21c051-0766-44a9-85d5-d985de7e5cb2-kube-api-access-2qrqh\") pod \"nova-metadata-0\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " pod="openstack/nova-metadata-0" Jan 27 09:20:13 crc kubenswrapper[4799]: I0127 09:20:13.939310 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.062413 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.244129 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c40a416-327e-4270-a06b-a984eebb3d27-logs\") pod \"2c40a416-327e-4270-a06b-a984eebb3d27\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.244190 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59zk8\" (UniqueName: \"kubernetes.io/projected/2c40a416-327e-4270-a06b-a984eebb3d27-kube-api-access-59zk8\") pod \"2c40a416-327e-4270-a06b-a984eebb3d27\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.244266 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-config-data\") pod \"2c40a416-327e-4270-a06b-a984eebb3d27\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.244353 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-combined-ca-bundle\") pod \"2c40a416-327e-4270-a06b-a984eebb3d27\" (UID: \"2c40a416-327e-4270-a06b-a984eebb3d27\") " Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.244939 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c40a416-327e-4270-a06b-a984eebb3d27-logs" (OuterVolumeSpecName: "logs") pod "2c40a416-327e-4270-a06b-a984eebb3d27" (UID: "2c40a416-327e-4270-a06b-a984eebb3d27"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.248904 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c40a416-327e-4270-a06b-a984eebb3d27-kube-api-access-59zk8" (OuterVolumeSpecName: "kube-api-access-59zk8") pod "2c40a416-327e-4270-a06b-a984eebb3d27" (UID: "2c40a416-327e-4270-a06b-a984eebb3d27"). InnerVolumeSpecName "kube-api-access-59zk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.262624 4799 generic.go:334] "Generic (PLEG): container finished" podID="2c40a416-327e-4270-a06b-a984eebb3d27" containerID="e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a" exitCode=0 Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.262682 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.262694 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c40a416-327e-4270-a06b-a984eebb3d27","Type":"ContainerDied","Data":"e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a"} Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.262753 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c40a416-327e-4270-a06b-a984eebb3d27","Type":"ContainerDied","Data":"bc976f02f129c6640a9be6f75d3fec54d88c5f458cdb792b20317d838e79c1d3"} Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.262777 4799 scope.go:117] "RemoveContainer" containerID="e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.272653 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-config-data" (OuterVolumeSpecName: "config-data") pod 
"2c40a416-327e-4270-a06b-a984eebb3d27" (UID: "2c40a416-327e-4270-a06b-a984eebb3d27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.282260 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c40a416-327e-4270-a06b-a984eebb3d27" (UID: "2c40a416-327e-4270-a06b-a984eebb3d27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.289195 4799 scope.go:117] "RemoveContainer" containerID="083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.306874 4799 scope.go:117] "RemoveContainer" containerID="e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a" Jan 27 09:20:14 crc kubenswrapper[4799]: E0127 09:20:14.307647 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a\": container with ID starting with e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a not found: ID does not exist" containerID="e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.307772 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a"} err="failed to get container status \"e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a\": rpc error: code = NotFound desc = could not find container \"e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a\": container with ID starting with 
e9e2236d80c4db4851448e5ab6e8b9fabf325ad564f9dbc828662d1d03e8ce7a not found: ID does not exist" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.307864 4799 scope.go:117] "RemoveContainer" containerID="083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8" Jan 27 09:20:14 crc kubenswrapper[4799]: E0127 09:20:14.308516 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8\": container with ID starting with 083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8 not found: ID does not exist" containerID="083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.308607 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8"} err="failed to get container status \"083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8\": rpc error: code = NotFound desc = could not find container \"083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8\": container with ID starting with 083bc04db3148ea469c4eaa59a56fa055b0bba13082cae6d66f242df6ac42ee8 not found: ID does not exist" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.346966 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.347019 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c40a416-327e-4270-a06b-a984eebb3d27-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.347034 4799 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-59zk8\" (UniqueName: \"kubernetes.io/projected/2c40a416-327e-4270-a06b-a984eebb3d27-kube-api-access-59zk8\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.347053 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c40a416-327e-4270-a06b-a984eebb3d27-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.411035 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:14 crc kubenswrapper[4799]: W0127 09:20:14.413580 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e21c051_0766_44a9_85d5_d985de7e5cb2.slice/crio-18c02fd768405b337239387252d38387a732541c4a7c3977eaa9f579aee4ce37 WatchSource:0}: Error finding container 18c02fd768405b337239387252d38387a732541c4a7c3977eaa9f579aee4ce37: Status 404 returned error can't find the container with id 18c02fd768405b337239387252d38387a732541c4a7c3977eaa9f579aee4ce37 Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.468049 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96147aa2-900d-4825-a662-2a37f034f0c3" path="/var/lib/kubelet/pods/96147aa2-900d-4825-a662-2a37f034f0c3/volumes" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.586240 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.594715 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.618870 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:14 crc kubenswrapper[4799]: E0127 09:20:14.619446 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-log" 
Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.619472 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-log" Jan 27 09:20:14 crc kubenswrapper[4799]: E0127 09:20:14.619521 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-api" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.619531 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-api" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.619732 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-log" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.619759 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" containerName="nova-api-api" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.620911 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.623520 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.629912 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.670589 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.757594 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgw9g\" (UniqueName: \"kubernetes.io/projected/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-kube-api-access-qgw9g\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.757655 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-config-data\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.757735 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-logs\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.757796 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " 
pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.859668 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgw9g\" (UniqueName: \"kubernetes.io/projected/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-kube-api-access-qgw9g\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.859758 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-config-data\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.859841 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-logs\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.859874 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.860819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-logs\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.864844 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.866331 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-config-data\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.877816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgw9g\" (UniqueName: \"kubernetes.io/projected/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-kube-api-access-qgw9g\") pod \"nova-api-0\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " pod="openstack/nova-api-0" Jan 27 09:20:14 crc kubenswrapper[4799]: I0127 09:20:14.957275 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:15 crc kubenswrapper[4799]: I0127 09:20:15.279580 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e21c051-0766-44a9-85d5-d985de7e5cb2","Type":"ContainerStarted","Data":"d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef"} Jan 27 09:20:15 crc kubenswrapper[4799]: I0127 09:20:15.279914 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e21c051-0766-44a9-85d5-d985de7e5cb2","Type":"ContainerStarted","Data":"371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b"} Jan 27 09:20:15 crc kubenswrapper[4799]: I0127 09:20:15.279926 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e21c051-0766-44a9-85d5-d985de7e5cb2","Type":"ContainerStarted","Data":"18c02fd768405b337239387252d38387a732541c4a7c3977eaa9f579aee4ce37"} Jan 27 09:20:15 crc kubenswrapper[4799]: I0127 
09:20:15.302966 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.302947316 podStartE2EDuration="2.302947316s" podCreationTimestamp="2026-01-27 09:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:15.296936742 +0000 UTC m=+5681.608040797" watchObservedRunningTime="2026-01-27 09:20:15.302947316 +0000 UTC m=+5681.614051371" Jan 27 09:20:15 crc kubenswrapper[4799]: I0127 09:20:15.452501 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:15 crc kubenswrapper[4799]: W0127 09:20:15.460432 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c490fb4_a622_47f4_ba5e_b4dd9f153fda.slice/crio-c092b34bb10d79c8af2ce7e00490413cefd31ac34dbfd23af4fd01b26626901a WatchSource:0}: Error finding container c092b34bb10d79c8af2ce7e00490413cefd31ac34dbfd23af4fd01b26626901a: Status 404 returned error can't find the container with id c092b34bb10d79c8af2ce7e00490413cefd31ac34dbfd23af4fd01b26626901a Jan 27 09:20:16 crc kubenswrapper[4799]: I0127 09:20:16.291525 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c490fb4-a622-47f4-ba5e-b4dd9f153fda","Type":"ContainerStarted","Data":"afec1364f0fe36694609c9468052e5ecb1f6702e831e3fe563334b7b642d3204"} Jan 27 09:20:16 crc kubenswrapper[4799]: I0127 09:20:16.291874 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c490fb4-a622-47f4-ba5e-b4dd9f153fda","Type":"ContainerStarted","Data":"3559624eea4600da5dcc93434c30564d314ddea3d019498af1dc1695cefbcf81"} Jan 27 09:20:16 crc kubenswrapper[4799]: I0127 09:20:16.291885 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4c490fb4-a622-47f4-ba5e-b4dd9f153fda","Type":"ContainerStarted","Data":"c092b34bb10d79c8af2ce7e00490413cefd31ac34dbfd23af4fd01b26626901a"} Jan 27 09:20:16 crc kubenswrapper[4799]: I0127 09:20:16.313567 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3135481430000002 podStartE2EDuration="2.313548143s" podCreationTimestamp="2026-01-27 09:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:16.306822079 +0000 UTC m=+5682.617926144" watchObservedRunningTime="2026-01-27 09:20:16.313548143 +0000 UTC m=+5682.624652198" Jan 27 09:20:16 crc kubenswrapper[4799]: I0127 09:20:16.460740 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c40a416-327e-4270-a06b-a984eebb3d27" path="/var/lib/kubelet/pods/2c40a416-327e-4270-a06b-a984eebb3d27/volumes" Jan 27 09:20:18 crc kubenswrapper[4799]: I0127 09:20:18.719997 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 09:20:18 crc kubenswrapper[4799]: I0127 09:20:18.940279 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:20:18 crc kubenswrapper[4799]: I0127 09:20:18.940369 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.228002 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-776w9"] Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.236083 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.238541 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.238591 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.248546 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-776w9"] Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.346034 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-scripts\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.346232 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-config-data\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.346539 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.346705 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrww\" (UniqueName: 
\"kubernetes.io/projected/7d72d741-e0a5-4876-b3dd-c773184fc95a-kube-api-access-kcrww\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.447755 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-config-data\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.447902 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.448762 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrww\" (UniqueName: \"kubernetes.io/projected/7d72d741-e0a5-4876-b3dd-c773184fc95a-kube-api-access-kcrww\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.448834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-scripts\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.456486 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-scripts\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.457637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-config-data\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.467781 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.474575 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrww\" (UniqueName: \"kubernetes.io/projected/7d72d741-e0a5-4876-b3dd-c773184fc95a-kube-api-access-kcrww\") pod \"nova-cell1-cell-mapping-776w9\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.565437 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.670409 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 09:20:19 crc kubenswrapper[4799]: I0127 09:20:19.738053 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 09:20:20 crc kubenswrapper[4799]: I0127 09:20:20.049390 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-776w9"] Jan 27 09:20:20 crc kubenswrapper[4799]: I0127 09:20:20.337263 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-776w9" event={"ID":"7d72d741-e0a5-4876-b3dd-c773184fc95a","Type":"ContainerStarted","Data":"133633e18390780601c541c81bdb2581d684ec4f807068997573a5a17b9e46e0"} Jan 27 09:20:20 crc kubenswrapper[4799]: I0127 09:20:20.337603 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-776w9" event={"ID":"7d72d741-e0a5-4876-b3dd-c773184fc95a","Type":"ContainerStarted","Data":"5bf2b24a49dbde657b46f5c3f80de9a7a21972b96734d8b9bb95bddc1e2b5cae"} Jan 27 09:20:20 crc kubenswrapper[4799]: I0127 09:20:20.363802 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 09:20:23 crc kubenswrapper[4799]: I0127 09:20:23.939941 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:20:23 crc kubenswrapper[4799]: I0127 09:20:23.940626 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:20:24 crc kubenswrapper[4799]: I0127 09:20:24.957722 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:20:24 crc kubenswrapper[4799]: I0127 09:20:24.957784 4799 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:20:25 crc kubenswrapper[4799]: I0127 09:20:25.022684 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.60:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:25 crc kubenswrapper[4799]: I0127 09:20:25.023083 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.60:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:25 crc kubenswrapper[4799]: I0127 09:20:25.422565 4799 generic.go:334] "Generic (PLEG): container finished" podID="7d72d741-e0a5-4876-b3dd-c773184fc95a" containerID="133633e18390780601c541c81bdb2581d684ec4f807068997573a5a17b9e46e0" exitCode=0 Jan 27 09:20:25 crc kubenswrapper[4799]: I0127 09:20:25.422704 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-776w9" event={"ID":"7d72d741-e0a5-4876-b3dd-c773184fc95a","Type":"ContainerDied","Data":"133633e18390780601c541c81bdb2581d684ec4f807068997573a5a17b9e46e0"} Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.042506 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.042506 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-log" probeResult="failure" output="Get 
\"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.802187 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.899641 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-scripts\") pod \"7d72d741-e0a5-4876-b3dd-c773184fc95a\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.900081 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-config-data\") pod \"7d72d741-e0a5-4876-b3dd-c773184fc95a\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.901153 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcrww\" (UniqueName: \"kubernetes.io/projected/7d72d741-e0a5-4876-b3dd-c773184fc95a-kube-api-access-kcrww\") pod \"7d72d741-e0a5-4876-b3dd-c773184fc95a\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.901287 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-combined-ca-bundle\") pod \"7d72d741-e0a5-4876-b3dd-c773184fc95a\" (UID: \"7d72d741-e0a5-4876-b3dd-c773184fc95a\") " Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.906060 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-scripts" (OuterVolumeSpecName: "scripts") pod "7d72d741-e0a5-4876-b3dd-c773184fc95a" (UID: 
"7d72d741-e0a5-4876-b3dd-c773184fc95a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.906171 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d72d741-e0a5-4876-b3dd-c773184fc95a-kube-api-access-kcrww" (OuterVolumeSpecName: "kube-api-access-kcrww") pod "7d72d741-e0a5-4876-b3dd-c773184fc95a" (UID: "7d72d741-e0a5-4876-b3dd-c773184fc95a"). InnerVolumeSpecName "kube-api-access-kcrww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.928745 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d72d741-e0a5-4876-b3dd-c773184fc95a" (UID: "7d72d741-e0a5-4876-b3dd-c773184fc95a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:26 crc kubenswrapper[4799]: I0127 09:20:26.941137 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-config-data" (OuterVolumeSpecName: "config-data") pod "7d72d741-e0a5-4876-b3dd-c773184fc95a" (UID: "7d72d741-e0a5-4876-b3dd-c773184fc95a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.003919 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcrww\" (UniqueName: \"kubernetes.io/projected/7d72d741-e0a5-4876-b3dd-c773184fc95a-kube-api-access-kcrww\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.003963 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.003974 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.003984 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d72d741-e0a5-4876-b3dd-c773184fc95a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.441262 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-776w9" event={"ID":"7d72d741-e0a5-4876-b3dd-c773184fc95a","Type":"ContainerDied","Data":"5bf2b24a49dbde657b46f5c3f80de9a7a21972b96734d8b9bb95bddc1e2b5cae"} Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.441338 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf2b24a49dbde657b46f5c3f80de9a7a21972b96734d8b9bb95bddc1e2b5cae" Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.442665 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-776w9" Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.741749 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.741949 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c537b700-af95-43ea-98e3-a37e340f1b35" containerName="nova-scheduler-scheduler" containerID="cri-o://1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114" gracePeriod=30 Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.766239 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.766475 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-log" containerID="cri-o://3559624eea4600da5dcc93434c30564d314ddea3d019498af1dc1695cefbcf81" gracePeriod=30 Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.766562 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-api" containerID="cri-o://afec1364f0fe36694609c9468052e5ecb1f6702e831e3fe563334b7b642d3204" gracePeriod=30 Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.780762 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.781014 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-log" containerID="cri-o://371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b" gracePeriod=30 Jan 27 09:20:27 crc kubenswrapper[4799]: I0127 09:20:27.781107 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-metadata" containerID="cri-o://d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef" gracePeriod=30 Jan 27 09:20:28 crc kubenswrapper[4799]: I0127 09:20:28.462377 4799 generic.go:334] "Generic (PLEG): container finished" podID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerID="371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b" exitCode=143 Jan 27 09:20:28 crc kubenswrapper[4799]: I0127 09:20:28.467129 4799 generic.go:334] "Generic (PLEG): container finished" podID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerID="3559624eea4600da5dcc93434c30564d314ddea3d019498af1dc1695cefbcf81" exitCode=143 Jan 27 09:20:28 crc kubenswrapper[4799]: I0127 09:20:28.475710 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e21c051-0766-44a9-85d5-d985de7e5cb2","Type":"ContainerDied","Data":"371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b"} Jan 27 09:20:28 crc kubenswrapper[4799]: I0127 09:20:28.475760 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c490fb4-a622-47f4-ba5e-b4dd9f153fda","Type":"ContainerDied","Data":"3559624eea4600da5dcc93434c30564d314ddea3d019498af1dc1695cefbcf81"} Jan 27 09:20:29 crc kubenswrapper[4799]: E0127 09:20:29.672658 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:20:29 crc kubenswrapper[4799]: E0127 09:20:29.674434 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:20:29 crc kubenswrapper[4799]: E0127 09:20:29.676014 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 09:20:29 crc kubenswrapper[4799]: E0127 09:20:29.676078 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c537b700-af95-43ea-98e3-a37e340f1b35" containerName="nova-scheduler-scheduler" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.333478 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.404001 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e21c051-0766-44a9-85d5-d985de7e5cb2-logs\") pod \"5e21c051-0766-44a9-85d5-d985de7e5cb2\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.404046 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-combined-ca-bundle\") pod \"5e21c051-0766-44a9-85d5-d985de7e5cb2\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.404069 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-config-data\") pod \"5e21c051-0766-44a9-85d5-d985de7e5cb2\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.404123 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/5e21c051-0766-44a9-85d5-d985de7e5cb2-kube-api-access-2qrqh\") pod \"5e21c051-0766-44a9-85d5-d985de7e5cb2\" (UID: \"5e21c051-0766-44a9-85d5-d985de7e5cb2\") " Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.405134 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e21c051-0766-44a9-85d5-d985de7e5cb2-logs" (OuterVolumeSpecName: "logs") pod "5e21c051-0766-44a9-85d5-d985de7e5cb2" (UID: "5e21c051-0766-44a9-85d5-d985de7e5cb2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.417717 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e21c051-0766-44a9-85d5-d985de7e5cb2-kube-api-access-2qrqh" (OuterVolumeSpecName: "kube-api-access-2qrqh") pod "5e21c051-0766-44a9-85d5-d985de7e5cb2" (UID: "5e21c051-0766-44a9-85d5-d985de7e5cb2"). InnerVolumeSpecName "kube-api-access-2qrqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.430348 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-config-data" (OuterVolumeSpecName: "config-data") pod "5e21c051-0766-44a9-85d5-d985de7e5cb2" (UID: "5e21c051-0766-44a9-85d5-d985de7e5cb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.430599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e21c051-0766-44a9-85d5-d985de7e5cb2" (UID: "5e21c051-0766-44a9-85d5-d985de7e5cb2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.499650 4799 generic.go:334] "Generic (PLEG): container finished" podID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerID="d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef" exitCode=0 Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.499694 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e21c051-0766-44a9-85d5-d985de7e5cb2","Type":"ContainerDied","Data":"d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef"} Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.499726 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e21c051-0766-44a9-85d5-d985de7e5cb2","Type":"ContainerDied","Data":"18c02fd768405b337239387252d38387a732541c4a7c3977eaa9f579aee4ce37"} Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.499731 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.499743 4799 scope.go:117] "RemoveContainer" containerID="d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.506201 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e21c051-0766-44a9-85d5-d985de7e5cb2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.506230 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.506242 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e21c051-0766-44a9-85d5-d985de7e5cb2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.506254 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/5e21c051-0766-44a9-85d5-d985de7e5cb2-kube-api-access-2qrqh\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.535898 4799 scope.go:117] "RemoveContainer" containerID="371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.539624 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.549006 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.573370 4799 scope.go:117] "RemoveContainer" containerID="d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef" Jan 27 09:20:31 crc kubenswrapper[4799]: 
E0127 09:20:31.573706 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef\": container with ID starting with d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef not found: ID does not exist" containerID="d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.573736 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef"} err="failed to get container status \"d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef\": rpc error: code = NotFound desc = could not find container \"d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef\": container with ID starting with d2331fd9896d5e2fe088db4175253861486395f9b5f60cd0d8451c95f3c8d0ef not found: ID does not exist" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.573754 4799 scope.go:117] "RemoveContainer" containerID="371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b" Jan 27 09:20:31 crc kubenswrapper[4799]: E0127 09:20:31.574055 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b\": container with ID starting with 371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b not found: ID does not exist" containerID="371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.574080 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b"} err="failed to get container status \"371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b\": 
rpc error: code = NotFound desc = could not find container \"371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b\": container with ID starting with 371a4470012385c3705359383af1fb26083ed8049f59d0d9a00150b0f7b21e6b not found: ID does not exist" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.579362 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:31 crc kubenswrapper[4799]: E0127 09:20:31.579829 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-log" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.579848 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-log" Jan 27 09:20:31 crc kubenswrapper[4799]: E0127 09:20:31.579858 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d72d741-e0a5-4876-b3dd-c773184fc95a" containerName="nova-manage" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.579865 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d72d741-e0a5-4876-b3dd-c773184fc95a" containerName="nova-manage" Jan 27 09:20:31 crc kubenswrapper[4799]: E0127 09:20:31.579881 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-metadata" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.579887 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-metadata" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.580056 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d72d741-e0a5-4876-b3dd-c773184fc95a" containerName="nova-manage" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.580076 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" 
containerName="nova-metadata-metadata" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.580087 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" containerName="nova-metadata-log" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.581009 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.586950 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.597977 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.712003 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-config-data\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.712486 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96172775-55ea-448b-a331-8c10c7a1ac20-logs\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.712550 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grnf6\" (UniqueName: \"kubernetes.io/projected/96172775-55ea-448b-a331-8c10c7a1ac20-kube-api-access-grnf6\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.712595 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.814867 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96172775-55ea-448b-a331-8c10c7a1ac20-logs\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.814956 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnf6\" (UniqueName: \"kubernetes.io/projected/96172775-55ea-448b-a331-8c10c7a1ac20-kube-api-access-grnf6\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.815002 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.815048 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-config-data\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.815355 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96172775-55ea-448b-a331-8c10c7a1ac20-logs\") pod \"nova-metadata-0\" (UID: 
\"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.820127 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-config-data\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.821568 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.830475 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnf6\" (UniqueName: \"kubernetes.io/projected/96172775-55ea-448b-a331-8c10c7a1ac20-kube-api-access-grnf6\") pod \"nova-metadata-0\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " pod="openstack/nova-metadata-0" Jan 27 09:20:31 crc kubenswrapper[4799]: I0127 09:20:31.914011 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:20:32 crc kubenswrapper[4799]: I0127 09:20:32.469813 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e21c051-0766-44a9-85d5-d985de7e5cb2" path="/var/lib/kubelet/pods/5e21c051-0766-44a9-85d5-d985de7e5cb2/volumes" Jan 27 09:20:32 crc kubenswrapper[4799]: I0127 09:20:32.511716 4799 generic.go:334] "Generic (PLEG): container finished" podID="c537b700-af95-43ea-98e3-a37e340f1b35" containerID="1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114" exitCode=0 Jan 27 09:20:32 crc kubenswrapper[4799]: I0127 09:20:32.511783 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c537b700-af95-43ea-98e3-a37e340f1b35","Type":"ContainerDied","Data":"1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114"} Jan 27 09:20:32 crc kubenswrapper[4799]: I0127 09:20:32.513899 4799 generic.go:334] "Generic (PLEG): container finished" podID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerID="afec1364f0fe36694609c9468052e5ecb1f6702e831e3fe563334b7b642d3204" exitCode=0 Jan 27 09:20:32 crc kubenswrapper[4799]: I0127 09:20:32.513954 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c490fb4-a622-47f4-ba5e-b4dd9f153fda","Type":"ContainerDied","Data":"afec1364f0fe36694609c9468052e5ecb1f6702e831e3fe563334b7b642d3204"} Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.087788 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: W0127 09:20:33.090076 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96172775_55ea_448b_a331_8c10c7a1ac20.slice/crio-1ee6b0968f75a99b7b4de1a96257f7d9342b12dd69f95a3a88f42bc23ff6f016 WatchSource:0}: Error finding container 1ee6b0968f75a99b7b4de1a96257f7d9342b12dd69f95a3a88f42bc23ff6f016: Status 404 
returned error can't find the container with id 1ee6b0968f75a99b7b4de1a96257f7d9342b12dd69f95a3a88f42bc23ff6f016 Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.132031 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.184864 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.246363 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-logs\") pod \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.246462 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-combined-ca-bundle\") pod \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.246496 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-config-data\") pod \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.246530 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgw9g\" (UniqueName: \"kubernetes.io/projected/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-kube-api-access-qgw9g\") pod \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\" (UID: \"4c490fb4-a622-47f4-ba5e-b4dd9f153fda\") " Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.247060 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-logs" (OuterVolumeSpecName: "logs") pod "4c490fb4-a622-47f4-ba5e-b4dd9f153fda" (UID: "4c490fb4-a622-47f4-ba5e-b4dd9f153fda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.256216 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-kube-api-access-qgw9g" (OuterVolumeSpecName: "kube-api-access-qgw9g") pod "4c490fb4-a622-47f4-ba5e-b4dd9f153fda" (UID: "4c490fb4-a622-47f4-ba5e-b4dd9f153fda"). InnerVolumeSpecName "kube-api-access-qgw9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.286600 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c490fb4-a622-47f4-ba5e-b4dd9f153fda" (UID: "4c490fb4-a622-47f4-ba5e-b4dd9f153fda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.287607 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-config-data" (OuterVolumeSpecName: "config-data") pod "4c490fb4-a622-47f4-ba5e-b4dd9f153fda" (UID: "4c490fb4-a622-47f4-ba5e-b4dd9f153fda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.353827 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clqdj\" (UniqueName: \"kubernetes.io/projected/c537b700-af95-43ea-98e3-a37e340f1b35-kube-api-access-clqdj\") pod \"c537b700-af95-43ea-98e3-a37e340f1b35\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.353889 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-combined-ca-bundle\") pod \"c537b700-af95-43ea-98e3-a37e340f1b35\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.354152 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-config-data\") pod \"c537b700-af95-43ea-98e3-a37e340f1b35\" (UID: \"c537b700-af95-43ea-98e3-a37e340f1b35\") " Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.354604 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.354628 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.354642 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.354653 4799 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-qgw9g\" (UniqueName: \"kubernetes.io/projected/4c490fb4-a622-47f4-ba5e-b4dd9f153fda-kube-api-access-qgw9g\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.359538 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c537b700-af95-43ea-98e3-a37e340f1b35-kube-api-access-clqdj" (OuterVolumeSpecName: "kube-api-access-clqdj") pod "c537b700-af95-43ea-98e3-a37e340f1b35" (UID: "c537b700-af95-43ea-98e3-a37e340f1b35"). InnerVolumeSpecName "kube-api-access-clqdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.392013 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-config-data" (OuterVolumeSpecName: "config-data") pod "c537b700-af95-43ea-98e3-a37e340f1b35" (UID: "c537b700-af95-43ea-98e3-a37e340f1b35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.398195 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c537b700-af95-43ea-98e3-a37e340f1b35" (UID: "c537b700-af95-43ea-98e3-a37e340f1b35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.456464 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.456501 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537b700-af95-43ea-98e3-a37e340f1b35-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.456511 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clqdj\" (UniqueName: \"kubernetes.io/projected/c537b700-af95-43ea-98e3-a37e340f1b35-kube-api-access-clqdj\") on node \"crc\" DevicePath \"\"" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.525199 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c537b700-af95-43ea-98e3-a37e340f1b35","Type":"ContainerDied","Data":"9b98b706ad20f4d81f60c32edc1e2e157895a57468dea102d5a66bc5dd09dee5"} Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.525248 4799 scope.go:117] "RemoveContainer" containerID="1fd0c9018b785fd8dd044d8f48100a623f94728efc3543cd61e2685b274c8114" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.525401 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.532422 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c490fb4-a622-47f4-ba5e-b4dd9f153fda","Type":"ContainerDied","Data":"c092b34bb10d79c8af2ce7e00490413cefd31ac34dbfd23af4fd01b26626901a"} Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.532467 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.535949 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96172775-55ea-448b-a331-8c10c7a1ac20","Type":"ContainerStarted","Data":"53538cdf8d9fbd86abab4d97489a277bb804009f49f919e7d17b3933518c95df"} Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.535982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96172775-55ea-448b-a331-8c10c7a1ac20","Type":"ContainerStarted","Data":"e2f46c1875eb09322a1a73ad4f190e1e058d3df354efe01afefd540af2400163"} Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.535995 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96172775-55ea-448b-a331-8c10c7a1ac20","Type":"ContainerStarted","Data":"1ee6b0968f75a99b7b4de1a96257f7d9342b12dd69f95a3a88f42bc23ff6f016"} Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.550275 4799 scope.go:117] "RemoveContainer" containerID="afec1364f0fe36694609c9468052e5ecb1f6702e831e3fe563334b7b642d3204" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.565922 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5659015849999998 podStartE2EDuration="2.565901585s" podCreationTimestamp="2026-01-27 09:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:33.557930958 +0000 UTC m=+5699.869035023" watchObservedRunningTime="2026-01-27 09:20:33.565901585 +0000 UTC m=+5699.877005640" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.583957 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.596880 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.597283 4799 scope.go:117] "RemoveContainer" containerID="3559624eea4600da5dcc93434c30564d314ddea3d019498af1dc1695cefbcf81" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.608259 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.623267 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.643363 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: E0127 09:20:33.644125 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c537b700-af95-43ea-98e3-a37e340f1b35" containerName="nova-scheduler-scheduler" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.644148 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c537b700-af95-43ea-98e3-a37e340f1b35" containerName="nova-scheduler-scheduler" Jan 27 09:20:33 crc kubenswrapper[4799]: E0127 09:20:33.644205 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-log" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.644216 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-log" Jan 27 09:20:33 crc kubenswrapper[4799]: E0127 09:20:33.644227 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-api" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.644235 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-api" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.644898 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c537b700-af95-43ea-98e3-a37e340f1b35" containerName="nova-scheduler-scheduler" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.644932 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-api" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.644962 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" containerName="nova-api-log" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.647550 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.652823 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.659211 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.677603 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.679149 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.685869 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.686317 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.766338 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2mr\" (UniqueName: \"kubernetes.io/projected/e8d15b71-81f3-4700-b916-db1a08d5c5fc-kube-api-access-nq2mr\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.766639 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-config-data\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.766783 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-config-data\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.766848 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.766938 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d15b71-81f3-4700-b916-db1a08d5c5fc-logs\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.767069 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.767151 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4xfk\" (UniqueName: \"kubernetes.io/projected/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-kube-api-access-w4xfk\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.868959 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.869105 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4xfk\" (UniqueName: \"kubernetes.io/projected/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-kube-api-access-w4xfk\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.869214 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2mr\" (UniqueName: 
\"kubernetes.io/projected/e8d15b71-81f3-4700-b916-db1a08d5c5fc-kube-api-access-nq2mr\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.869311 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-config-data\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.869389 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-config-data\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.869485 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.869571 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d15b71-81f3-4700-b916-db1a08d5c5fc-logs\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.870004 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d15b71-81f3-4700-b916-db1a08d5c5fc-logs\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.874220 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-config-data\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.874289 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.874502 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.875780 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-config-data\") pod \"nova-scheduler-0\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.888806 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2mr\" (UniqueName: \"kubernetes.io/projected/e8d15b71-81f3-4700-b916-db1a08d5c5fc-kube-api-access-nq2mr\") pod \"nova-api-0\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " pod="openstack/nova-api-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.892399 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4xfk\" (UniqueName: \"kubernetes.io/projected/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-kube-api-access-w4xfk\") pod \"nova-scheduler-0\" (UID: 
\"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " pod="openstack/nova-scheduler-0" Jan 27 09:20:33 crc kubenswrapper[4799]: I0127 09:20:33.981789 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:20:34 crc kubenswrapper[4799]: I0127 09:20:34.006706 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:20:34 crc kubenswrapper[4799]: I0127 09:20:34.463183 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c490fb4-a622-47f4-ba5e-b4dd9f153fda" path="/var/lib/kubelet/pods/4c490fb4-a622-47f4-ba5e-b4dd9f153fda/volumes" Jan 27 09:20:34 crc kubenswrapper[4799]: I0127 09:20:34.464764 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c537b700-af95-43ea-98e3-a37e340f1b35" path="/var/lib/kubelet/pods/c537b700-af95-43ea-98e3-a37e340f1b35/volumes" Jan 27 09:20:34 crc kubenswrapper[4799]: I0127 09:20:34.610465 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:20:34 crc kubenswrapper[4799]: W0127 09:20:34.612551 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8d15b71_81f3_4700_b916_db1a08d5c5fc.slice/crio-63b8c8e157c0e8259c0a2074119f7ab6f85813351849dd3005314975fdc899a3 WatchSource:0}: Error finding container 63b8c8e157c0e8259c0a2074119f7ab6f85813351849dd3005314975fdc899a3: Status 404 returned error can't find the container with id 63b8c8e157c0e8259c0a2074119f7ab6f85813351849dd3005314975fdc899a3 Jan 27 09:20:34 crc kubenswrapper[4799]: I0127 09:20:34.691076 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:20:34 crc kubenswrapper[4799]: W0127 09:20:34.692090 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d60b300_02c2_4bc8_908d_2f7e2b5bddad.slice/crio-50c3beccf7975846778254b08b1db6a8e558d1db4717d42c40576636c752ec1d WatchSource:0}: Error finding container 50c3beccf7975846778254b08b1db6a8e558d1db4717d42c40576636c752ec1d: Status 404 returned error can't find the container with id 50c3beccf7975846778254b08b1db6a8e558d1db4717d42c40576636c752ec1d Jan 27 09:20:35 crc kubenswrapper[4799]: I0127 09:20:35.559389 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d15b71-81f3-4700-b916-db1a08d5c5fc","Type":"ContainerStarted","Data":"b3361ee321f3621da3eda593449da8db9db8c2e0fe647a43890a2ad585feee52"} Jan 27 09:20:35 crc kubenswrapper[4799]: I0127 09:20:35.560882 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d15b71-81f3-4700-b916-db1a08d5c5fc","Type":"ContainerStarted","Data":"8aa353eab7c5b40f9c768464929c9e61a3d5fb825048c99739b8564d0b457f6a"} Jan 27 09:20:35 crc kubenswrapper[4799]: I0127 09:20:35.560962 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d15b71-81f3-4700-b916-db1a08d5c5fc","Type":"ContainerStarted","Data":"63b8c8e157c0e8259c0a2074119f7ab6f85813351849dd3005314975fdc899a3"} Jan 27 09:20:35 crc kubenswrapper[4799]: I0127 09:20:35.579366 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d60b300-02c2-4bc8-908d-2f7e2b5bddad","Type":"ContainerStarted","Data":"89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c"} Jan 27 09:20:35 crc kubenswrapper[4799]: I0127 09:20:35.579419 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d60b300-02c2-4bc8-908d-2f7e2b5bddad","Type":"ContainerStarted","Data":"50c3beccf7975846778254b08b1db6a8e558d1db4717d42c40576636c752ec1d"} Jan 27 09:20:35 crc kubenswrapper[4799]: I0127 09:20:35.598826 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.598805247 podStartE2EDuration="2.598805247s" podCreationTimestamp="2026-01-27 09:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:35.592513356 +0000 UTC m=+5701.903617421" watchObservedRunningTime="2026-01-27 09:20:35.598805247 +0000 UTC m=+5701.909909312" Jan 27 09:20:35 crc kubenswrapper[4799]: I0127 09:20:35.612419 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.612355376 podStartE2EDuration="2.612355376s" podCreationTimestamp="2026-01-27 09:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:35.611838242 +0000 UTC m=+5701.922942307" watchObservedRunningTime="2026-01-27 09:20:35.612355376 +0000 UTC m=+5701.923459441" Jan 27 09:20:36 crc kubenswrapper[4799]: I0127 09:20:36.914949 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:20:36 crc kubenswrapper[4799]: I0127 09:20:36.915591 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:20:38 crc kubenswrapper[4799]: I0127 09:20:38.981956 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:20:40 crc kubenswrapper[4799]: I0127 09:20:40.622502 4799 scope.go:117] "RemoveContainer" containerID="082bf0c2700ca42b5deb31efaf2dbbbd68fecfcc4e384ee9fe98c015d621df66" Jan 27 09:20:40 crc kubenswrapper[4799]: I0127 09:20:40.647901 4799 scope.go:117] "RemoveContainer" containerID="60972ecfcaae3a1d8cc3971335cab2f74bd0581b95f7ca6e3e930d9c7d30742c" Jan 27 09:20:40 crc kubenswrapper[4799]: I0127 09:20:40.694050 4799 scope.go:117] "RemoveContainer" 
containerID="875e04e6f807f6719782958b8851ec4f6f5a529d9e15da5b38ab29232915cef0" Jan 27 09:20:41 crc kubenswrapper[4799]: I0127 09:20:41.914381 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:20:41 crc kubenswrapper[4799]: I0127 09:20:41.914426 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:20:42 crc kubenswrapper[4799]: I0127 09:20:42.998504 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.63:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:42 crc kubenswrapper[4799]: I0127 09:20:42.998509 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.63:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:43 crc kubenswrapper[4799]: I0127 09:20:43.982359 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 09:20:44 crc kubenswrapper[4799]: I0127 09:20:44.007956 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:20:44 crc kubenswrapper[4799]: I0127 09:20:44.008021 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:20:44 crc kubenswrapper[4799]: I0127 09:20:44.012863 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 09:20:44 crc kubenswrapper[4799]: I0127 09:20:44.679245 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Jan 27 09:20:45 crc kubenswrapper[4799]: I0127 09:20:45.090468 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.65:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:45 crc kubenswrapper[4799]: I0127 09:20:45.090803 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.65:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:20:51 crc kubenswrapper[4799]: I0127 09:20:51.917814 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:20:51 crc kubenswrapper[4799]: I0127 09:20:51.920097 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:20:51 crc kubenswrapper[4799]: I0127 09:20:51.922381 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:20:52 crc kubenswrapper[4799]: I0127 09:20:52.733830 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:20:54 crc kubenswrapper[4799]: I0127 09:20:54.012100 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:20:54 crc kubenswrapper[4799]: I0127 09:20:54.013229 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:20:54 crc kubenswrapper[4799]: I0127 09:20:54.013638 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:20:54 crc kubenswrapper[4799]: I0127 09:20:54.017458 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:20:54 crc kubenswrapper[4799]: I0127 09:20:54.998210 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.001980 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.231650 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f64c694b9-5pkm9"] Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.239583 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.266449 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f64c694b9-5pkm9"] Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.313447 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-config\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.313514 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-sb\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.313548 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-nb\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.313571 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-dns-svc\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.313645 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwtq\" (UniqueName: \"kubernetes.io/projected/44de1aed-71d7-42bb-945f-f19ec5470500-kube-api-access-rjwtq\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.415869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-config\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.416370 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-sb\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.416486 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-nb\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.416594 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-dns-svc\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.416935 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwtq\" (UniqueName: \"kubernetes.io/projected/44de1aed-71d7-42bb-945f-f19ec5470500-kube-api-access-rjwtq\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.417913 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-sb\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.418672 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-config\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.419185 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-dns-svc\") pod 
\"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.421530 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-nb\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.453237 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwtq\" (UniqueName: \"kubernetes.io/projected/44de1aed-71d7-42bb-945f-f19ec5470500-kube-api-access-rjwtq\") pod \"dnsmasq-dns-5f64c694b9-5pkm9\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") " pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:55 crc kubenswrapper[4799]: I0127 09:20:55.617627 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:56 crc kubenswrapper[4799]: I0127 09:20:56.138930 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f64c694b9-5pkm9"] Jan 27 09:20:57 crc kubenswrapper[4799]: I0127 09:20:57.014604 4799 generic.go:334] "Generic (PLEG): container finished" podID="44de1aed-71d7-42bb-945f-f19ec5470500" containerID="080eaf41d9d81aba926c58cb5a728c5e45836e62a3751a9f503690a7b059e56d" exitCode=0 Jan 27 09:20:57 crc kubenswrapper[4799]: I0127 09:20:57.018073 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" event={"ID":"44de1aed-71d7-42bb-945f-f19ec5470500","Type":"ContainerDied","Data":"080eaf41d9d81aba926c58cb5a728c5e45836e62a3751a9f503690a7b059e56d"} Jan 27 09:20:57 crc kubenswrapper[4799]: I0127 09:20:57.018112 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" event={"ID":"44de1aed-71d7-42bb-945f-f19ec5470500","Type":"ContainerStarted","Data":"3b6349c1412accde143c8accf828aeaf14fe59e5ae3d3cc54d21e8c8000cb7b4"} Jan 27 09:20:58 crc kubenswrapper[4799]: I0127 09:20:58.031060 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" event={"ID":"44de1aed-71d7-42bb-945f-f19ec5470500","Type":"ContainerStarted","Data":"999174b5726944f34f93e1a1b4a4027b94f29c85024d15546e103a4f82a5d503"} Jan 27 09:20:58 crc kubenswrapper[4799]: I0127 09:20:58.031539 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:20:58 crc kubenswrapper[4799]: I0127 09:20:58.059487 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" podStartSLOduration=3.059460472 podStartE2EDuration="3.059460472s" podCreationTimestamp="2026-01-27 09:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:20:58.051715081 +0000 UTC m=+5724.362819156" watchObservedRunningTime="2026-01-27 09:20:58.059460472 +0000 UTC m=+5724.370564537" Jan 27 09:21:05 crc kubenswrapper[4799]: I0127 09:21:05.621050 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" Jan 27 09:21:05 crc kubenswrapper[4799]: I0127 09:21:05.696157 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64dbbdfd45-nw7r2"] Jan 27 09:21:05 crc kubenswrapper[4799]: I0127 09:21:05.696507 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerName="dnsmasq-dns" containerID="cri-o://56754cbfc9462a9c592ef30f720dd0e53d592bb9e4975ff855109f3b5ab43856" gracePeriod=10 Jan 27 09:21:06 crc kubenswrapper[4799]: I0127 09:21:06.103273 4799 generic.go:334] "Generic (PLEG): container finished" podID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerID="56754cbfc9462a9c592ef30f720dd0e53d592bb9e4975ff855109f3b5ab43856" exitCode=0 Jan 27 09:21:06 crc kubenswrapper[4799]: I0127 09:21:06.103333 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" event={"ID":"3ed81796-f7dd-4fe5-b876-d4761d0fddf8","Type":"ContainerDied","Data":"56754cbfc9462a9c592ef30f720dd0e53d592bb9e4975ff855109f3b5ab43856"} Jan 27 09:21:06 crc kubenswrapper[4799]: I0127 09:21:06.536354 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.56:5353: connect: connection refused" Jan 27 09:21:06 crc kubenswrapper[4799]: I0127 09:21:06.979276 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.052174 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-dns-svc\") pod \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.052798 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v5dt\" (UniqueName: \"kubernetes.io/projected/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-kube-api-access-4v5dt\") pod \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.054126 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-sb\") pod \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.054173 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-config\") pod \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.054232 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-nb\") pod \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\" (UID: \"3ed81796-f7dd-4fe5-b876-d4761d0fddf8\") " Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.077718 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-kube-api-access-4v5dt" (OuterVolumeSpecName: "kube-api-access-4v5dt") pod "3ed81796-f7dd-4fe5-b876-d4761d0fddf8" (UID: "3ed81796-f7dd-4fe5-b876-d4761d0fddf8"). InnerVolumeSpecName "kube-api-access-4v5dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.107586 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ed81796-f7dd-4fe5-b876-d4761d0fddf8" (UID: "3ed81796-f7dd-4fe5-b876-d4761d0fddf8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.110946 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ed81796-f7dd-4fe5-b876-d4761d0fddf8" (UID: "3ed81796-f7dd-4fe5-b876-d4761d0fddf8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.113707 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ed81796-f7dd-4fe5-b876-d4761d0fddf8" (UID: "3ed81796-f7dd-4fe5-b876-d4761d0fddf8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.116237 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" event={"ID":"3ed81796-f7dd-4fe5-b876-d4761d0fddf8","Type":"ContainerDied","Data":"4d5ba0652a756e941784b08073307c1207e782f2fc4bf71415a105f8ae986c67"} Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.116393 4799 scope.go:117] "RemoveContainer" containerID="56754cbfc9462a9c592ef30f720dd0e53d592bb9e4975ff855109f3b5ab43856" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.116682 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dbbdfd45-nw7r2" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.118944 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-config" (OuterVolumeSpecName: "config") pod "3ed81796-f7dd-4fe5-b876-d4761d0fddf8" (UID: "3ed81796-f7dd-4fe5-b876-d4761d0fddf8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.155528 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.156023 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.156036 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.156045 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.156054 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v5dt\" (UniqueName: \"kubernetes.io/projected/3ed81796-f7dd-4fe5-b876-d4761d0fddf8-kube-api-access-4v5dt\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.205256 4799 scope.go:117] "RemoveContainer" containerID="09bbb9d0ae7b21e941a92070802040cabe4ed81847d65b606c2db6dbf1ee4635" Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.458754 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64dbbdfd45-nw7r2"] Jan 27 09:21:07 crc kubenswrapper[4799]: I0127 09:21:07.478858 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64dbbdfd45-nw7r2"] Jan 27 09:21:08 crc kubenswrapper[4799]: I0127 09:21:08.464886 4799 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" path="/var/lib/kubelet/pods/3ed81796-f7dd-4fe5-b876-d4761d0fddf8/volumes" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.154539 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hjkv7"] Jan 27 09:21:09 crc kubenswrapper[4799]: E0127 09:21:09.155350 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerName="dnsmasq-dns" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.155376 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerName="dnsmasq-dns" Jan 27 09:21:09 crc kubenswrapper[4799]: E0127 09:21:09.155430 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerName="init" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.155439 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerName="init" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.155638 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed81796-f7dd-4fe5-b876-d4761d0fddf8" containerName="dnsmasq-dns" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.156490 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.166316 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hjkv7"] Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.256953 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a2e4-account-create-update-svrkq"] Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.258175 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.267389 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.269005 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a2e4-account-create-update-svrkq"] Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.297965 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-operator-scripts\") pod \"cinder-db-create-hjkv7\" (UID: \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.299211 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5qcj\" (UniqueName: \"kubernetes.io/projected/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-kube-api-access-d5qcj\") pod \"cinder-db-create-hjkv7\" (UID: \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.401949 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a758b2e8-c50a-4872-9493-7e61d25e8dc4-operator-scripts\") pod \"cinder-a2e4-account-create-update-svrkq\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.402413 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-operator-scripts\") pod \"cinder-db-create-hjkv7\" (UID: 
\"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.402634 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5qcj\" (UniqueName: \"kubernetes.io/projected/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-kube-api-access-d5qcj\") pod \"cinder-db-create-hjkv7\" (UID: \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.402782 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5dkx\" (UniqueName: \"kubernetes.io/projected/a758b2e8-c50a-4872-9493-7e61d25e8dc4-kube-api-access-w5dkx\") pod \"cinder-a2e4-account-create-update-svrkq\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.403258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-operator-scripts\") pod \"cinder-db-create-hjkv7\" (UID: \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.421319 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5qcj\" (UniqueName: \"kubernetes.io/projected/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-kube-api-access-d5qcj\") pod \"cinder-db-create-hjkv7\" (UID: \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.475564 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.505330 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5dkx\" (UniqueName: \"kubernetes.io/projected/a758b2e8-c50a-4872-9493-7e61d25e8dc4-kube-api-access-w5dkx\") pod \"cinder-a2e4-account-create-update-svrkq\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.505415 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a758b2e8-c50a-4872-9493-7e61d25e8dc4-operator-scripts\") pod \"cinder-a2e4-account-create-update-svrkq\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.506321 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a758b2e8-c50a-4872-9493-7e61d25e8dc4-operator-scripts\") pod \"cinder-a2e4-account-create-update-svrkq\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.526156 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5dkx\" (UniqueName: \"kubernetes.io/projected/a758b2e8-c50a-4872-9493-7e61d25e8dc4-kube-api-access-w5dkx\") pod \"cinder-a2e4-account-create-update-svrkq\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.577958 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:09 crc kubenswrapper[4799]: W0127 09:21:09.960506 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebdcba0b_16a3_4b75_a2b1_7d0e3395469e.slice/crio-d5ee0d2345801996147cb0acad8c6d57ea11997767970cb3ce33c4d9181885c6 WatchSource:0}: Error finding container d5ee0d2345801996147cb0acad8c6d57ea11997767970cb3ce33c4d9181885c6: Status 404 returned error can't find the container with id d5ee0d2345801996147cb0acad8c6d57ea11997767970cb3ce33c4d9181885c6 Jan 27 09:21:09 crc kubenswrapper[4799]: I0127 09:21:09.961219 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hjkv7"] Jan 27 09:21:10 crc kubenswrapper[4799]: I0127 09:21:10.089866 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a2e4-account-create-update-svrkq"] Jan 27 09:21:10 crc kubenswrapper[4799]: I0127 09:21:10.160220 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hjkv7" event={"ID":"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e","Type":"ContainerStarted","Data":"d5ee0d2345801996147cb0acad8c6d57ea11997767970cb3ce33c4d9181885c6"} Jan 27 09:21:10 crc kubenswrapper[4799]: I0127 09:21:10.163560 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a2e4-account-create-update-svrkq" event={"ID":"a758b2e8-c50a-4872-9493-7e61d25e8dc4","Type":"ContainerStarted","Data":"160cdbd9dd51e65563b4d23459a941d07ad30ddd14d77c6e4040efee98fb5f8c"} Jan 27 09:21:11 crc kubenswrapper[4799]: I0127 09:21:11.174176 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a2e4-account-create-update-svrkq" event={"ID":"a758b2e8-c50a-4872-9493-7e61d25e8dc4","Type":"ContainerStarted","Data":"9b4332251d36345e8ac23413488e8697cb9aefa75890c3035c3a3b1e2d0b4bbe"} Jan 27 09:21:11 crc kubenswrapper[4799]: I0127 09:21:11.176875 4799 generic.go:334] 
"Generic (PLEG): container finished" podID="ebdcba0b-16a3-4b75-a2b1-7d0e3395469e" containerID="95631f3bbcb886e8c7addc815c6f28be7e54ce8046eacf0068004574071c1743" exitCode=0 Jan 27 09:21:11 crc kubenswrapper[4799]: I0127 09:21:11.176939 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hjkv7" event={"ID":"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e","Type":"ContainerDied","Data":"95631f3bbcb886e8c7addc815c6f28be7e54ce8046eacf0068004574071c1743"} Jan 27 09:21:11 crc kubenswrapper[4799]: I0127 09:21:11.200477 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a2e4-account-create-update-svrkq" podStartSLOduration=2.200458249 podStartE2EDuration="2.200458249s" podCreationTimestamp="2026-01-27 09:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:11.193357695 +0000 UTC m=+5737.504461780" watchObservedRunningTime="2026-01-27 09:21:11.200458249 +0000 UTC m=+5737.511562314" Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.189767 4799 generic.go:334] "Generic (PLEG): container finished" podID="a758b2e8-c50a-4872-9493-7e61d25e8dc4" containerID="9b4332251d36345e8ac23413488e8697cb9aefa75890c3035c3a3b1e2d0b4bbe" exitCode=0 Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.189825 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a2e4-account-create-update-svrkq" event={"ID":"a758b2e8-c50a-4872-9493-7e61d25e8dc4","Type":"ContainerDied","Data":"9b4332251d36345e8ac23413488e8697cb9aefa75890c3035c3a3b1e2d0b4bbe"} Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.610871 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.776219 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5qcj\" (UniqueName: \"kubernetes.io/projected/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-kube-api-access-d5qcj\") pod \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\" (UID: \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.776335 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-operator-scripts\") pod \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\" (UID: \"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e\") " Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.776996 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebdcba0b-16a3-4b75-a2b1-7d0e3395469e" (UID: "ebdcba0b-16a3-4b75-a2b1-7d0e3395469e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.782608 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-kube-api-access-d5qcj" (OuterVolumeSpecName: "kube-api-access-d5qcj") pod "ebdcba0b-16a3-4b75-a2b1-7d0e3395469e" (UID: "ebdcba0b-16a3-4b75-a2b1-7d0e3395469e"). InnerVolumeSpecName "kube-api-access-d5qcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.878979 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5qcj\" (UniqueName: \"kubernetes.io/projected/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-kube-api-access-d5qcj\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:12 crc kubenswrapper[4799]: I0127 09:21:12.879490 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.200669 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hjkv7" event={"ID":"ebdcba0b-16a3-4b75-a2b1-7d0e3395469e","Type":"ContainerDied","Data":"d5ee0d2345801996147cb0acad8c6d57ea11997767970cb3ce33c4d9181885c6"} Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.200722 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5ee0d2345801996147cb0acad8c6d57ea11997767970cb3ce33c4d9181885c6" Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.200859 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hjkv7" Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.569027 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.693634 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a758b2e8-c50a-4872-9493-7e61d25e8dc4-operator-scripts\") pod \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.693828 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5dkx\" (UniqueName: \"kubernetes.io/projected/a758b2e8-c50a-4872-9493-7e61d25e8dc4-kube-api-access-w5dkx\") pod \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\" (UID: \"a758b2e8-c50a-4872-9493-7e61d25e8dc4\") " Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.694539 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a758b2e8-c50a-4872-9493-7e61d25e8dc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a758b2e8-c50a-4872-9493-7e61d25e8dc4" (UID: "a758b2e8-c50a-4872-9493-7e61d25e8dc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.698424 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a758b2e8-c50a-4872-9493-7e61d25e8dc4-kube-api-access-w5dkx" (OuterVolumeSpecName: "kube-api-access-w5dkx") pod "a758b2e8-c50a-4872-9493-7e61d25e8dc4" (UID: "a758b2e8-c50a-4872-9493-7e61d25e8dc4"). InnerVolumeSpecName "kube-api-access-w5dkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.796199 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a758b2e8-c50a-4872-9493-7e61d25e8dc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:13 crc kubenswrapper[4799]: I0127 09:21:13.796247 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5dkx\" (UniqueName: \"kubernetes.io/projected/a758b2e8-c50a-4872-9493-7e61d25e8dc4-kube-api-access-w5dkx\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.213075 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a2e4-account-create-update-svrkq" event={"ID":"a758b2e8-c50a-4872-9493-7e61d25e8dc4","Type":"ContainerDied","Data":"160cdbd9dd51e65563b4d23459a941d07ad30ddd14d77c6e4040efee98fb5f8c"} Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.213393 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="160cdbd9dd51e65563b4d23459a941d07ad30ddd14d77c6e4040efee98fb5f8c" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.213129 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a2e4-account-create-update-svrkq" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.506020 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-22xds"] Jan 27 09:21:14 crc kubenswrapper[4799]: E0127 09:21:14.537388 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebdcba0b-16a3-4b75-a2b1-7d0e3395469e" containerName="mariadb-database-create" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.537436 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebdcba0b-16a3-4b75-a2b1-7d0e3395469e" containerName="mariadb-database-create" Jan 27 09:21:14 crc kubenswrapper[4799]: E0127 09:21:14.537552 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a758b2e8-c50a-4872-9493-7e61d25e8dc4" containerName="mariadb-account-create-update" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.537564 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a758b2e8-c50a-4872-9493-7e61d25e8dc4" containerName="mariadb-account-create-update" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.538277 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebdcba0b-16a3-4b75-a2b1-7d0e3395469e" containerName="mariadb-database-create" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.538326 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a758b2e8-c50a-4872-9493-7e61d25e8dc4" containerName="mariadb-account-create-update" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.543823 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.547971 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.548228 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cgzns" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.548487 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.570573 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-22xds"] Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.652527 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-combined-ca-bundle\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.652737 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9vp\" (UniqueName: \"kubernetes.io/projected/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-kube-api-access-qg9vp\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.652819 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-etc-machine-id\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.653158 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-config-data\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.653185 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-scripts\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.653365 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-db-sync-config-data\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.755626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-combined-ca-bundle\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.755701 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9vp\" (UniqueName: \"kubernetes.io/projected/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-kube-api-access-qg9vp\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.755727 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-etc-machine-id\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.755837 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-config-data\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.755862 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-scripts\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.755913 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-db-sync-config-data\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.755991 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-etc-machine-id\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.760197 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-config-data\") pod \"cinder-db-sync-22xds\" (UID: 
\"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.760583 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-combined-ca-bundle\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.763884 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-db-sync-config-data\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.781914 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-scripts\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.787064 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9vp\" (UniqueName: \"kubernetes.io/projected/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-kube-api-access-qg9vp\") pod \"cinder-db-sync-22xds\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:14 crc kubenswrapper[4799]: I0127 09:21:14.866378 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:15 crc kubenswrapper[4799]: I0127 09:21:15.431084 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-22xds"] Jan 27 09:21:16 crc kubenswrapper[4799]: I0127 09:21:16.234010 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-22xds" event={"ID":"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8","Type":"ContainerStarted","Data":"03230bf99dce8a890f0083d2ddf746009b778ad7f279a82209b3b1ccb7f8c1ca"} Jan 27 09:21:16 crc kubenswrapper[4799]: I0127 09:21:16.234373 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-22xds" event={"ID":"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8","Type":"ContainerStarted","Data":"4071d09ff2470ab222359ee40bfa120563520cde97ab012a36c2fc03f027b6d1"} Jan 27 09:21:16 crc kubenswrapper[4799]: I0127 09:21:16.254313 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-22xds" podStartSLOduration=2.254275138 podStartE2EDuration="2.254275138s" podCreationTimestamp="2026-01-27 09:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:16.252386647 +0000 UTC m=+5742.563490752" watchObservedRunningTime="2026-01-27 09:21:16.254275138 +0000 UTC m=+5742.565379203" Jan 27 09:21:23 crc kubenswrapper[4799]: I0127 09:21:23.312760 4799 generic.go:334] "Generic (PLEG): container finished" podID="39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" containerID="03230bf99dce8a890f0083d2ddf746009b778ad7f279a82209b3b1ccb7f8c1ca" exitCode=0 Jan 27 09:21:23 crc kubenswrapper[4799]: I0127 09:21:23.312964 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-22xds" event={"ID":"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8","Type":"ContainerDied","Data":"03230bf99dce8a890f0083d2ddf746009b778ad7f279a82209b3b1ccb7f8c1ca"} Jan 27 09:21:23 crc kubenswrapper[4799]: 
I0127 09:21:23.731762 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:21:23 crc kubenswrapper[4799]: I0127 09:21:23.731816 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.746030 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.887869 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-etc-machine-id\") pod \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.887957 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg9vp\" (UniqueName: \"kubernetes.io/projected/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-kube-api-access-qg9vp\") pod \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.888051 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-db-sync-config-data\") pod \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " Jan 27 09:21:24 crc 
kubenswrapper[4799]: I0127 09:21:24.888084 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" (UID: "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.888123 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-scripts\") pod \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.888212 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-config-data\") pod \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.888622 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-combined-ca-bundle\") pod \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\" (UID: \"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8\") " Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.889074 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.894449 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" (UID: "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.898070 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-scripts" (OuterVolumeSpecName: "scripts") pod "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" (UID: "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.899410 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-kube-api-access-qg9vp" (OuterVolumeSpecName: "kube-api-access-qg9vp") pod "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" (UID: "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8"). InnerVolumeSpecName "kube-api-access-qg9vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.934863 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-config-data" (OuterVolumeSpecName: "config-data") pod "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" (UID: "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.938943 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" (UID: "39b6f7cb-8b32-40b5-a24b-a72cd119c6e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.991164 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg9vp\" (UniqueName: \"kubernetes.io/projected/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-kube-api-access-qg9vp\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.991203 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.991213 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.991222 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:24 crc kubenswrapper[4799]: I0127 09:21:24.991235 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.354437 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-22xds" event={"ID":"39b6f7cb-8b32-40b5-a24b-a72cd119c6e8","Type":"ContainerDied","Data":"4071d09ff2470ab222359ee40bfa120563520cde97ab012a36c2fc03f027b6d1"} Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.354530 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4071d09ff2470ab222359ee40bfa120563520cde97ab012a36c2fc03f027b6d1" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.354582 4799 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-22xds" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.695492 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557fb89b5c-8kwpv"] Jan 27 09:21:25 crc kubenswrapper[4799]: E0127 09:21:25.695981 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" containerName="cinder-db-sync" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.696008 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" containerName="cinder-db-sync" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.696323 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" containerName="cinder-db-sync" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.703863 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.725199 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557fb89b5c-8kwpv"] Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.805211 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-config\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.805271 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-ovsdbserver-nb\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " 
pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.805320 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l7z8\" (UniqueName: \"kubernetes.io/projected/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-kube-api-access-9l7z8\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.805436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-ovsdbserver-sb\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.805478 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-dns-svc\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.907150 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-config\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.907204 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-ovsdbserver-nb\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " 
pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.907233 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l7z8\" (UniqueName: \"kubernetes.io/projected/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-kube-api-access-9l7z8\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.907280 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-ovsdbserver-sb\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.907358 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-dns-svc\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.908363 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-config\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.908388 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-dns-svc\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 
09:21:25.908538 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-ovsdbserver-sb\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.909159 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-ovsdbserver-nb\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.913168 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.915021 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.918024 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cgzns" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.918275 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.918483 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.921138 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.934612 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:21:25 crc kubenswrapper[4799]: I0127 09:21:25.946267 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9l7z8\" (UniqueName: \"kubernetes.io/projected/b65b5ffb-7c8c-4092-a794-8ea6b6c490eb-kube-api-access-9l7z8\") pod \"dnsmasq-dns-557fb89b5c-8kwpv\" (UID: \"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb\") " pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.009012 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data-custom\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.009377 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-scripts\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.009424 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-logs\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.009440 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899sz\" (UniqueName: \"kubernetes.io/projected/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-kube-api-access-899sz\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.009469 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.009622 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.009681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.038480 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111216 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data-custom\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111277 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-scripts\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111334 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-logs\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111355 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899sz\" (UniqueName: \"kubernetes.io/projected/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-kube-api-access-899sz\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111402 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111438 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111459 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111830 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-logs\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.111902 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.115782 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-scripts\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.116777 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.119218 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.119383 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data-custom\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.133857 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899sz\" (UniqueName: \"kubernetes.io/projected/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-kube-api-access-899sz\") pod \"cinder-api-0\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.238626 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.556987 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557fb89b5c-8kwpv"] Jan 27 09:21:26 crc kubenswrapper[4799]: I0127 09:21:26.761518 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:21:26 crc kubenswrapper[4799]: W0127 09:21:26.769633 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2bf11fd_b6af_47c8_86b0_e16b36b8841e.slice/crio-d0733491def361276807d9decc9eca4fa93c653c5544d736f63ca05cfb465d37 WatchSource:0}: Error finding container d0733491def361276807d9decc9eca4fa93c653c5544d736f63ca05cfb465d37: Status 404 returned error can't find the container with id d0733491def361276807d9decc9eca4fa93c653c5544d736f63ca05cfb465d37 Jan 27 09:21:27 crc kubenswrapper[4799]: I0127 09:21:27.394992 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2bf11fd-b6af-47c8-86b0-e16b36b8841e","Type":"ContainerStarted","Data":"d0733491def361276807d9decc9eca4fa93c653c5544d736f63ca05cfb465d37"} Jan 27 09:21:27 crc kubenswrapper[4799]: I0127 09:21:27.397564 4799 generic.go:334] "Generic (PLEG): container finished" podID="b65b5ffb-7c8c-4092-a794-8ea6b6c490eb" containerID="f24bc1ba31a2dbdbdd4cfbab743c3f1e741841b3bfa8507186f195b97fdad80f" exitCode=0 Jan 27 09:21:27 crc kubenswrapper[4799]: I0127 09:21:27.397647 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" event={"ID":"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb","Type":"ContainerDied","Data":"f24bc1ba31a2dbdbdd4cfbab743c3f1e741841b3bfa8507186f195b97fdad80f"} Jan 27 09:21:27 crc kubenswrapper[4799]: I0127 09:21:27.397711 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" 
event={"ID":"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb","Type":"ContainerStarted","Data":"9c60579fbaf203123d6ee30483167e72d3e540310817343f8d06e5c97e0d58e5"} Jan 27 09:21:28 crc kubenswrapper[4799]: I0127 09:21:28.419059 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2bf11fd-b6af-47c8-86b0-e16b36b8841e","Type":"ContainerStarted","Data":"9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05"} Jan 27 09:21:29 crc kubenswrapper[4799]: I0127 09:21:29.429999 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2bf11fd-b6af-47c8-86b0-e16b36b8841e","Type":"ContainerStarted","Data":"8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58"} Jan 27 09:21:29 crc kubenswrapper[4799]: I0127 09:21:29.430646 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 09:21:29 crc kubenswrapper[4799]: I0127 09:21:29.434726 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" event={"ID":"b65b5ffb-7c8c-4092-a794-8ea6b6c490eb","Type":"ContainerStarted","Data":"d474d15dc4a3845cf2d2e088ebdffa733804add06101e1f1a9bdeaf86b3139be"} Jan 27 09:21:29 crc kubenswrapper[4799]: I0127 09:21:29.434930 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" Jan 27 09:21:29 crc kubenswrapper[4799]: I0127 09:21:29.464799 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.464783178 podStartE2EDuration="4.464783178s" podCreationTimestamp="2026-01-27 09:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:29.457249363 +0000 UTC m=+5755.768353448" watchObservedRunningTime="2026-01-27 09:21:29.464783178 +0000 UTC m=+5755.775887243" Jan 27 09:21:29 crc kubenswrapper[4799]: 
I0127 09:21:29.489077 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv" podStartSLOduration=4.489052218 podStartE2EDuration="4.489052218s" podCreationTimestamp="2026-01-27 09:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:29.482538341 +0000 UTC m=+5755.793642416" watchObservedRunningTime="2026-01-27 09:21:29.489052218 +0000 UTC m=+5755.800156283"
Jan 27 09:21:36 crc kubenswrapper[4799]: I0127 09:21:36.041511 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557fb89b5c-8kwpv"
Jan 27 09:21:36 crc kubenswrapper[4799]: I0127 09:21:36.149369 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f64c694b9-5pkm9"]
Jan 27 09:21:36 crc kubenswrapper[4799]: I0127 09:21:36.150087 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" podUID="44de1aed-71d7-42bb-945f-f19ec5470500" containerName="dnsmasq-dns" containerID="cri-o://999174b5726944f34f93e1a1b4a4027b94f29c85024d15546e103a4f82a5d503" gracePeriod=10
Jan 27 09:21:36 crc kubenswrapper[4799]: I0127 09:21:36.500771 4799 generic.go:334] "Generic (PLEG): container finished" podID="44de1aed-71d7-42bb-945f-f19ec5470500" containerID="999174b5726944f34f93e1a1b4a4027b94f29c85024d15546e103a4f82a5d503" exitCode=0
Jan 27 09:21:36 crc kubenswrapper[4799]: I0127 09:21:36.501175 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" event={"ID":"44de1aed-71d7-42bb-945f-f19ec5470500","Type":"ContainerDied","Data":"999174b5726944f34f93e1a1b4a4027b94f29c85024d15546e103a4f82a5d503"}
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.307516 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9"
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.458456 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-dns-svc\") pod \"44de1aed-71d7-42bb-945f-f19ec5470500\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") "
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.458623 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-config\") pod \"44de1aed-71d7-42bb-945f-f19ec5470500\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") "
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.458686 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-nb\") pod \"44de1aed-71d7-42bb-945f-f19ec5470500\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") "
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.458732 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-sb\") pod \"44de1aed-71d7-42bb-945f-f19ec5470500\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") "
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.458803 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjwtq\" (UniqueName: \"kubernetes.io/projected/44de1aed-71d7-42bb-945f-f19ec5470500-kube-api-access-rjwtq\") pod \"44de1aed-71d7-42bb-945f-f19ec5470500\" (UID: \"44de1aed-71d7-42bb-945f-f19ec5470500\") "
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.466085 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44de1aed-71d7-42bb-945f-f19ec5470500-kube-api-access-rjwtq" (OuterVolumeSpecName: "kube-api-access-rjwtq") pod "44de1aed-71d7-42bb-945f-f19ec5470500" (UID: "44de1aed-71d7-42bb-945f-f19ec5470500"). InnerVolumeSpecName "kube-api-access-rjwtq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.518500 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9" event={"ID":"44de1aed-71d7-42bb-945f-f19ec5470500","Type":"ContainerDied","Data":"3b6349c1412accde143c8accf828aeaf14fe59e5ae3d3cc54d21e8c8000cb7b4"}
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.518572 4799 scope.go:117] "RemoveContainer" containerID="999174b5726944f34f93e1a1b4a4027b94f29c85024d15546e103a4f82a5d503"
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.518781 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f64c694b9-5pkm9"
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.523491 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-config" (OuterVolumeSpecName: "config") pod "44de1aed-71d7-42bb-945f-f19ec5470500" (UID: "44de1aed-71d7-42bb-945f-f19ec5470500"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.531419 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44de1aed-71d7-42bb-945f-f19ec5470500" (UID: "44de1aed-71d7-42bb-945f-f19ec5470500"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.531893 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44de1aed-71d7-42bb-945f-f19ec5470500" (UID: "44de1aed-71d7-42bb-945f-f19ec5470500"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.563044 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44de1aed-71d7-42bb-945f-f19ec5470500" (UID: "44de1aed-71d7-42bb-945f-f19ec5470500"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.563331 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.563360 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-config\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.563369 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.563383 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44de1aed-71d7-42bb-945f-f19ec5470500-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.563394 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjwtq\" (UniqueName: \"kubernetes.io/projected/44de1aed-71d7-42bb-945f-f19ec5470500-kube-api-access-rjwtq\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.623763 4799 scope.go:117] "RemoveContainer" containerID="080eaf41d9d81aba926c58cb5a728c5e45836e62a3751a9f503690a7b059e56d"
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.856585 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f64c694b9-5pkm9"]
Jan 27 09:21:37 crc kubenswrapper[4799]: I0127 09:21:37.864936 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f64c694b9-5pkm9"]
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.001643 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.001972 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-log" containerID="cri-o://8aa353eab7c5b40f9c768464929c9e61a3d5fb825048c99739b8564d0b457f6a" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.002057 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-api" containerID="cri-o://b3361ee321f3621da3eda593449da8db9db8c2e0fe647a43890a2ad585feee52" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.015025 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.015417 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1d60b300-02c2-4bc8-908d-2f7e2b5bddad" containerName="nova-scheduler-scheduler" containerID="cri-o://89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.030710 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.031009 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6cc3da9c-c322-43a7-8d4e-56518c6f70cc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2c32b01e4fd558523da8fdddb26cb8c8aefcfa8da052eb16af87379a26dfdcbf" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.048945 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.049603 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-log" containerID="cri-o://e2f46c1875eb09322a1a73ad4f190e1e058d3df354efe01afefd540af2400163" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.049702 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-metadata" containerID="cri-o://53538cdf8d9fbd86abab4d97489a277bb804009f49f919e7d17b3933518c95df" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.059093 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.059492 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="3c980df1-a520-4c83-9094-65ffa132b464" containerName="nova-cell1-conductor-conductor" containerID="cri-o://942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.191541 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.191804 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9cf460e2-e400-4353-8e03-611ab39e1842" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9" gracePeriod=30
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.463965 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44de1aed-71d7-42bb-945f-f19ec5470500" path="/var/lib/kubelet/pods/44de1aed-71d7-42bb-945f-f19ec5470500/volumes"
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.552888 4799 generic.go:334] "Generic (PLEG): container finished" podID="96172775-55ea-448b-a331-8c10c7a1ac20" containerID="e2f46c1875eb09322a1a73ad4f190e1e058d3df354efe01afefd540af2400163" exitCode=143
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.552963 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96172775-55ea-448b-a331-8c10c7a1ac20","Type":"ContainerDied","Data":"e2f46c1875eb09322a1a73ad4f190e1e058d3df354efe01afefd540af2400163"}
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.568470 4799 generic.go:334] "Generic (PLEG): container finished" podID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerID="8aa353eab7c5b40f9c768464929c9e61a3d5fb825048c99739b8564d0b457f6a" exitCode=143
Jan 27 09:21:38 crc kubenswrapper[4799]: I0127 09:21:38.568523 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d15b71-81f3-4700-b916-db1a08d5c5fc","Type":"ContainerDied","Data":"8aa353eab7c5b40f9c768464929c9e61a3d5fb825048c99739b8564d0b457f6a"}
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.713743 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.719472 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.729678 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.729758 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="3c980df1-a520-4c83-9094-65ffa132b464" containerName="nova-cell1-conductor-conductor"
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.984815 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.986734 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.988278 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 09:21:38 crc kubenswrapper[4799]: E0127 09:21:38.988356 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1d60b300-02c2-4bc8-908d-2f7e2b5bddad" containerName="nova-scheduler-scheduler"
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.003881 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.597550 4799 generic.go:334] "Generic (PLEG): container finished" podID="6cc3da9c-c322-43a7-8d4e-56518c6f70cc" containerID="2c32b01e4fd558523da8fdddb26cb8c8aefcfa8da052eb16af87379a26dfdcbf" exitCode=0
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.597611 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6cc3da9c-c322-43a7-8d4e-56518c6f70cc","Type":"ContainerDied","Data":"2c32b01e4fd558523da8fdddb26cb8c8aefcfa8da052eb16af87379a26dfdcbf"}
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.775618 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.915059 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-config-data\") pod \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") "
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.915145 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw9ms\" (UniqueName: \"kubernetes.io/projected/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-kube-api-access-jw9ms\") pod \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") "
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.915180 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-combined-ca-bundle\") pod \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\" (UID: \"6cc3da9c-c322-43a7-8d4e-56518c6f70cc\") "
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.941930 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-kube-api-access-jw9ms" (OuterVolumeSpecName: "kube-api-access-jw9ms") pod "6cc3da9c-c322-43a7-8d4e-56518c6f70cc" (UID: "6cc3da9c-c322-43a7-8d4e-56518c6f70cc"). InnerVolumeSpecName "kube-api-access-jw9ms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.947128 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cc3da9c-c322-43a7-8d4e-56518c6f70cc" (UID: "6cc3da9c-c322-43a7-8d4e-56518c6f70cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:21:39 crc kubenswrapper[4799]: I0127 09:21:39.989624 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-config-data" (OuterVolumeSpecName: "config-data") pod "6cc3da9c-c322-43a7-8d4e-56518c6f70cc" (UID: "6cc3da9c-c322-43a7-8d4e-56518c6f70cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.016906 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.016949 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw9ms\" (UniqueName: \"kubernetes.io/projected/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-kube-api-access-jw9ms\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.016966 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc3da9c-c322-43a7-8d4e-56518c6f70cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.174798 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.325704 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-config-data\") pod \"9cf460e2-e400-4353-8e03-611ab39e1842\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") "
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.325747 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnq2h\" (UniqueName: \"kubernetes.io/projected/9cf460e2-e400-4353-8e03-611ab39e1842-kube-api-access-xnq2h\") pod \"9cf460e2-e400-4353-8e03-611ab39e1842\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") "
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.325913 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-combined-ca-bundle\") pod \"9cf460e2-e400-4353-8e03-611ab39e1842\" (UID: \"9cf460e2-e400-4353-8e03-611ab39e1842\") "
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.330889 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf460e2-e400-4353-8e03-611ab39e1842-kube-api-access-xnq2h" (OuterVolumeSpecName: "kube-api-access-xnq2h") pod "9cf460e2-e400-4353-8e03-611ab39e1842" (UID: "9cf460e2-e400-4353-8e03-611ab39e1842"). InnerVolumeSpecName "kube-api-access-xnq2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.358276 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cf460e2-e400-4353-8e03-611ab39e1842" (UID: "9cf460e2-e400-4353-8e03-611ab39e1842"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.359548 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-config-data" (OuterVolumeSpecName: "config-data") pod "9cf460e2-e400-4353-8e03-611ab39e1842" (UID: "9cf460e2-e400-4353-8e03-611ab39e1842"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.427864 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.427905 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf460e2-e400-4353-8e03-611ab39e1842-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.427917 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnq2h\" (UniqueName: \"kubernetes.io/projected/9cf460e2-e400-4353-8e03-611ab39e1842-kube-api-access-xnq2h\") on node \"crc\" DevicePath \"\""
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.613740 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.614556 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6cc3da9c-c322-43a7-8d4e-56518c6f70cc","Type":"ContainerDied","Data":"c4f65c26d220acccfb8775a85138522087fd1dc0b100e178b498a32ca547d16a"}
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.614595 4799 scope.go:117] "RemoveContainer" containerID="2c32b01e4fd558523da8fdddb26cb8c8aefcfa8da052eb16af87379a26dfdcbf"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.621291 4799 generic.go:334] "Generic (PLEG): container finished" podID="9cf460e2-e400-4353-8e03-611ab39e1842" containerID="c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9" exitCode=0
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.621356 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9cf460e2-e400-4353-8e03-611ab39e1842","Type":"ContainerDied","Data":"c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9"}
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.621385 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9cf460e2-e400-4353-8e03-611ab39e1842","Type":"ContainerDied","Data":"332146c89239a436fc72eb502b5554edbe1fad670e4cee96a736d8071c35a2b1"}
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.621444 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.706884 4799 scope.go:117] "RemoveContainer" containerID="c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.712042 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.728226 4799 scope.go:117] "RemoveContainer" containerID="c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9"
Jan 27 09:21:40 crc kubenswrapper[4799]: E0127 09:21:40.728937 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9\": container with ID starting with c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9 not found: ID does not exist" containerID="c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.728988 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9"} err="failed to get container status \"c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9\": rpc error: code = NotFound desc = could not find container \"c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9\": container with ID starting with c18df90c75e23d615660d1f9bc3ad524cbab3165ca0bcdace0d6cd3bbf6d5df9 not found: ID does not exist"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.730895 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.747367 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.764894 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: E0127 09:21:40.765336 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44de1aed-71d7-42bb-945f-f19ec5470500" containerName="dnsmasq-dns"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.765359 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="44de1aed-71d7-42bb-945f-f19ec5470500" containerName="dnsmasq-dns"
Jan 27 09:21:40 crc kubenswrapper[4799]: E0127 09:21:40.765376 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf460e2-e400-4353-8e03-611ab39e1842" containerName="nova-cell0-conductor-conductor"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.765384 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf460e2-e400-4353-8e03-611ab39e1842" containerName="nova-cell0-conductor-conductor"
Jan 27 09:21:40 crc kubenswrapper[4799]: E0127 09:21:40.765402 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc3da9c-c322-43a7-8d4e-56518c6f70cc" containerName="nova-cell1-novncproxy-novncproxy"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.765409 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc3da9c-c322-43a7-8d4e-56518c6f70cc" containerName="nova-cell1-novncproxy-novncproxy"
Jan 27 09:21:40 crc kubenswrapper[4799]: E0127 09:21:40.765438 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44de1aed-71d7-42bb-945f-f19ec5470500" containerName="init"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.765444 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="44de1aed-71d7-42bb-945f-f19ec5470500" containerName="init"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.765637 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc3da9c-c322-43a7-8d4e-56518c6f70cc" containerName="nova-cell1-novncproxy-novncproxy"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.765663 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf460e2-e400-4353-8e03-611ab39e1842" containerName="nova-cell0-conductor-conductor"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.765680 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="44de1aed-71d7-42bb-945f-f19ec5470500" containerName="dnsmasq-dns"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.766414 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.773683 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.786959 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.803037 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.824901 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.826343 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.828831 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.835855 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.938680 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7b28\" (UniqueName: \"kubernetes.io/projected/6803f652-fe33-451f-8a37-5ab86eddd782-kube-api-access-t7b28\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.938773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6803f652-fe33-451f-8a37-5ab86eddd782-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.938831 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnn7g\" (UniqueName: \"kubernetes.io/projected/3434f389-fe51-497d-af18-a0e23a76cb52-kube-api-access-bnn7g\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.938875 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3434f389-fe51-497d-af18-a0e23a76cb52-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.938959 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3434f389-fe51-497d-af18-a0e23a76cb52-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:40 crc kubenswrapper[4799]: I0127 09:21:40.939026 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6803f652-fe33-451f-8a37-5ab86eddd782-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.041262 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7b28\" (UniqueName: \"kubernetes.io/projected/6803f652-fe33-451f-8a37-5ab86eddd782-kube-api-access-t7b28\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.041353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6803f652-fe33-451f-8a37-5ab86eddd782-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.041399 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnn7g\" (UniqueName: \"kubernetes.io/projected/3434f389-fe51-497d-af18-a0e23a76cb52-kube-api-access-bnn7g\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.041430 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3434f389-fe51-497d-af18-a0e23a76cb52-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.041454 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3434f389-fe51-497d-af18-a0e23a76cb52-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.041527 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6803f652-fe33-451f-8a37-5ab86eddd782-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.048775 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6803f652-fe33-451f-8a37-5ab86eddd782-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.048918 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6803f652-fe33-451f-8a37-5ab86eddd782-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.052042 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3434f389-fe51-497d-af18-a0e23a76cb52-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.052957 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3434f389-fe51-497d-af18-a0e23a76cb52-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.057008 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7b28\" (UniqueName: \"kubernetes.io/projected/6803f652-fe33-451f-8a37-5ab86eddd782-kube-api-access-t7b28\") pod \"nova-cell1-novncproxy-0\" (UID: \"6803f652-fe33-451f-8a37-5ab86eddd782\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.060245 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnn7g\" (UniqueName: \"kubernetes.io/projected/3434f389-fe51-497d-af18-a0e23a76cb52-kube-api-access-bnn7g\") pod \"nova-cell0-conductor-0\" (UID: \"3434f389-fe51-497d-af18-a0e23a76cb52\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.098508 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.144629 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.617649 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.656948 4799 generic.go:334] "Generic (PLEG): container finished" podID="96172775-55ea-448b-a331-8c10c7a1ac20" containerID="53538cdf8d9fbd86abab4d97489a277bb804009f49f919e7d17b3933518c95df" exitCode=0
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.657116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96172775-55ea-448b-a331-8c10c7a1ac20","Type":"ContainerDied","Data":"53538cdf8d9fbd86abab4d97489a277bb804009f49f919e7d17b3933518c95df"}
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.686603 4799 generic.go:334] "Generic (PLEG): container finished" podID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerID="b3361ee321f3621da3eda593449da8db9db8c2e0fe647a43890a2ad585feee52" exitCode=0
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.686668 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d15b71-81f3-4700-b916-db1a08d5c5fc","Type":"ContainerDied","Data":"b3361ee321f3621da3eda593449da8db9db8c2e0fe647a43890a2ad585feee52"}
Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.822606 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 09:21:41 crc kubenswrapper[4799]: W0127 09:21:41.832018 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3434f389_fe51_497d_af18_a0e23a76cb52.slice/crio-47f212277d80d2072662d8f5a7acd3a7fb0c0763af10dbba8e99b8e20fcb38d1 WatchSource:0}: Error finding container 47f212277d80d2072662d8f5a7acd3a7fb0c0763af10dbba8e99b8e20fcb38d1: Status 404 returned error can't find the container with id 
47f212277d80d2072662d8f5a7acd3a7fb0c0763af10dbba8e99b8e20fcb38d1 Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.914860 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.63:8775/\": dial tcp 10.217.1.63:8775: connect: connection refused" Jan 27 09:21:41 crc kubenswrapper[4799]: I0127 09:21:41.916523 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.63:8775/\": dial tcp 10.217.1.63:8775: connect: connection refused" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.189446 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.310704 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d15b71-81f3-4700-b916-db1a08d5c5fc-logs\") pod \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.310957 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq2mr\" (UniqueName: \"kubernetes.io/projected/e8d15b71-81f3-4700-b916-db1a08d5c5fc-kube-api-access-nq2mr\") pod \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.310994 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-combined-ca-bundle\") pod \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\" (UID: 
\"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.311022 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-config-data\") pod \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\" (UID: \"e8d15b71-81f3-4700-b916-db1a08d5c5fc\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.314549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d15b71-81f3-4700-b916-db1a08d5c5fc-logs" (OuterVolumeSpecName: "logs") pod "e8d15b71-81f3-4700-b916-db1a08d5c5fc" (UID: "e8d15b71-81f3-4700-b916-db1a08d5c5fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.329711 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d15b71-81f3-4700-b916-db1a08d5c5fc-kube-api-access-nq2mr" (OuterVolumeSpecName: "kube-api-access-nq2mr") pod "e8d15b71-81f3-4700-b916-db1a08d5c5fc" (UID: "e8d15b71-81f3-4700-b916-db1a08d5c5fc"). InnerVolumeSpecName "kube-api-access-nq2mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.346430 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8d15b71-81f3-4700-b916-db1a08d5c5fc" (UID: "e8d15b71-81f3-4700-b916-db1a08d5c5fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.415678 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq2mr\" (UniqueName: \"kubernetes.io/projected/e8d15b71-81f3-4700-b916-db1a08d5c5fc-kube-api-access-nq2mr\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.416157 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.416170 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d15b71-81f3-4700-b916-db1a08d5c5fc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.425564 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-config-data" (OuterVolumeSpecName: "config-data") pod "e8d15b71-81f3-4700-b916-db1a08d5c5fc" (UID: "e8d15b71-81f3-4700-b916-db1a08d5c5fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.472380 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.486894 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc3da9c-c322-43a7-8d4e-56518c6f70cc" path="/var/lib/kubelet/pods/6cc3da9c-c322-43a7-8d4e-56518c6f70cc/volumes" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.487996 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf460e2-e400-4353-8e03-611ab39e1842" path="/var/lib/kubelet/pods/9cf460e2-e400-4353-8e03-611ab39e1842/volumes" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.535743 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d15b71-81f3-4700-b916-db1a08d5c5fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.638254 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grnf6\" (UniqueName: \"kubernetes.io/projected/96172775-55ea-448b-a331-8c10c7a1ac20-kube-api-access-grnf6\") pod \"96172775-55ea-448b-a331-8c10c7a1ac20\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.638475 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-config-data\") pod \"96172775-55ea-448b-a331-8c10c7a1ac20\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.638571 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-combined-ca-bundle\") pod \"96172775-55ea-448b-a331-8c10c7a1ac20\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.638734 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96172775-55ea-448b-a331-8c10c7a1ac20-logs\") pod \"96172775-55ea-448b-a331-8c10c7a1ac20\" (UID: \"96172775-55ea-448b-a331-8c10c7a1ac20\") " Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.640961 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96172775-55ea-448b-a331-8c10c7a1ac20-logs" (OuterVolumeSpecName: "logs") pod "96172775-55ea-448b-a331-8c10c7a1ac20" (UID: "96172775-55ea-448b-a331-8c10c7a1ac20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.645667 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96172775-55ea-448b-a331-8c10c7a1ac20-kube-api-access-grnf6" (OuterVolumeSpecName: "kube-api-access-grnf6") pod "96172775-55ea-448b-a331-8c10c7a1ac20" (UID: "96172775-55ea-448b-a331-8c10c7a1ac20"). InnerVolumeSpecName "kube-api-access-grnf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.672785 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-config-data" (OuterVolumeSpecName: "config-data") pod "96172775-55ea-448b-a331-8c10c7a1ac20" (UID: "96172775-55ea-448b-a331-8c10c7a1ac20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.690238 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96172775-55ea-448b-a331-8c10c7a1ac20" (UID: "96172775-55ea-448b-a331-8c10c7a1ac20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.707669 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6803f652-fe33-451f-8a37-5ab86eddd782","Type":"ContainerStarted","Data":"cefa548b926375bcd21cda0d91ac855c750250da533106e26365b5fb2995c009"} Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.707735 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6803f652-fe33-451f-8a37-5ab86eddd782","Type":"ContainerStarted","Data":"62c6266becf5a02454c6e78c7bbcf9a533c2b4ebb2169a318afcd81fbbad11ec"} Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.722565 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96172775-55ea-448b-a331-8c10c7a1ac20","Type":"ContainerDied","Data":"1ee6b0968f75a99b7b4de1a96257f7d9342b12dd69f95a3a88f42bc23ff6f016"} Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.722623 4799 scope.go:117] "RemoveContainer" containerID="53538cdf8d9fbd86abab4d97489a277bb804009f49f919e7d17b3933518c95df" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.722795 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.748895 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3434f389-fe51-497d-af18-a0e23a76cb52","Type":"ContainerStarted","Data":"f02a6bac51d2d39ce1a174e51fa10ce754b37e28f460fea2dab19ec1e8b09484"} Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.748956 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3434f389-fe51-497d-af18-a0e23a76cb52","Type":"ContainerStarted","Data":"47f212277d80d2072662d8f5a7acd3a7fb0c0763af10dbba8e99b8e20fcb38d1"} Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.749238 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.749659 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grnf6\" (UniqueName: \"kubernetes.io/projected/96172775-55ea-448b-a331-8c10c7a1ac20-kube-api-access-grnf6\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.750556 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.755732 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96172775-55ea-448b-a331-8c10c7a1ac20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.755786 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96172775-55ea-448b-a331-8c10c7a1ac20-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.776459 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d15b71-81f3-4700-b916-db1a08d5c5fc","Type":"ContainerDied","Data":"63b8c8e157c0e8259c0a2074119f7ab6f85813351849dd3005314975fdc899a3"} Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.776551 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.779863 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.779839902 podStartE2EDuration="2.779839902s" podCreationTimestamp="2026-01-27 09:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:42.775935736 +0000 UTC m=+5769.087039801" watchObservedRunningTime="2026-01-27 09:21:42.779839902 +0000 UTC m=+5769.090943967" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.786560 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.786538135 podStartE2EDuration="2.786538135s" podCreationTimestamp="2026-01-27 09:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:42.744117699 +0000 UTC m=+5769.055221764" watchObservedRunningTime="2026-01-27 09:21:42.786538135 +0000 UTC m=+5769.097642200" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.814135 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.821878 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.852642 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 
09:21:42.852861 4799 scope.go:117] "RemoveContainer" containerID="e2f46c1875eb09322a1a73ad4f190e1e058d3df354efe01afefd540af2400163" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.870625 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.876517 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: E0127 09:21:42.877037 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-metadata" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877051 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-metadata" Jan 27 09:21:42 crc kubenswrapper[4799]: E0127 09:21:42.877078 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-log" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877084 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-log" Jan 27 09:21:42 crc kubenswrapper[4799]: E0127 09:21:42.877096 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-log" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877105 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-log" Jan 27 09:21:42 crc kubenswrapper[4799]: E0127 09:21:42.877126 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-api" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877132 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" 
containerName="nova-api-api" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877367 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-log" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877380 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-metadata" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877396 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" containerName="nova-metadata-log" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.877406 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" containerName="nova-api-api" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.879682 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.883072 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.892584 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.905826 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.907849 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.913831 4799 scope.go:117] "RemoveContainer" containerID="b3361ee321f3621da3eda593449da8db9db8c2e0fe647a43890a2ad585feee52" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.914779 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.916777 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvgk\" (UniqueName: \"kubernetes.io/projected/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-kube-api-access-8xvgk\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970320 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-config-data\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970385 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970421 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a85377-d942-461f-b381-da5730b8b48d-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970455 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mhw\" (UniqueName: \"kubernetes.io/projected/23a85377-d942-461f-b381-da5730b8b48d-kube-api-access-n7mhw\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970521 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23a85377-d942-461f-b381-da5730b8b48d-logs\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970563 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a85377-d942-461f-b381-da5730b8b48d-config-data\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.970630 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-logs\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:42 crc kubenswrapper[4799]: I0127 09:21:42.974200 4799 scope.go:117] "RemoveContainer" containerID="8aa353eab7c5b40f9c768464929c9e61a3d5fb825048c99739b8564d0b457f6a" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.071930 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a85377-d942-461f-b381-da5730b8b48d-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.071987 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7mhw\" (UniqueName: \"kubernetes.io/projected/23a85377-d942-461f-b381-da5730b8b48d-kube-api-access-n7mhw\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.072021 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23a85377-d942-461f-b381-da5730b8b48d-logs\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.072054 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a85377-d942-461f-b381-da5730b8b48d-config-data\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.072107 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-logs\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.072166 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvgk\" (UniqueName: \"kubernetes.io/projected/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-kube-api-access-8xvgk\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.072187 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-config-data\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.072224 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.074249 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-logs\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.074687 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23a85377-d942-461f-b381-da5730b8b48d-logs\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.077353 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-config-data\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.097458 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a85377-d942-461f-b381-da5730b8b48d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.099665 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a85377-d942-461f-b381-da5730b8b48d-config-data\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.101889 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvgk\" (UniqueName: \"kubernetes.io/projected/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-kube-api-access-8xvgk\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.103512 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7mhw\" (UniqueName: \"kubernetes.io/projected/23a85377-d942-461f-b381-da5730b8b48d-kube-api-access-n7mhw\") pod \"nova-api-0\" (UID: \"23a85377-d942-461f-b381-da5730b8b48d\") " pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.103878 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5450ba-70ac-47fc-ac5c-8cd34f80c39c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c\") " pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.214108 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.245315 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 09:21:43 crc kubenswrapper[4799]: E0127 09:21:43.711657 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:21:43 crc kubenswrapper[4799]: E0127 09:21:43.714278 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:21:43 crc kubenswrapper[4799]: E0127 09:21:43.715397 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 09:21:43 crc kubenswrapper[4799]: E0127 09:21:43.715453 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="3c980df1-a520-4c83-9094-65ffa132b464" containerName="nova-cell1-conductor-conductor" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.782627 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.799522 4799 generic.go:334] "Generic (PLEG): container finished" podID="1d60b300-02c2-4bc8-908d-2f7e2b5bddad" containerID="89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" exitCode=0 Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.799612 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d60b300-02c2-4bc8-908d-2f7e2b5bddad","Type":"ContainerDied","Data":"89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c"} Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.799651 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d60b300-02c2-4bc8-908d-2f7e2b5bddad","Type":"ContainerDied","Data":"50c3beccf7975846778254b08b1db6a8e558d1db4717d42c40576636c752ec1d"} Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.799671 4799 scope.go:117] "RemoveContainer" containerID="89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.799809 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.856027 4799 scope.go:117] "RemoveContainer" containerID="89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" Jan 27 09:21:43 crc kubenswrapper[4799]: E0127 09:21:43.857157 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c\": container with ID starting with 89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c not found: ID does not exist" containerID="89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.857228 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c"} err="failed to get container status \"89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c\": rpc error: code = NotFound desc = could not find container \"89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c\": container with ID starting with 89045eb040bd6f93089885e5689c781508568ee002952038a3e102b8a3f58b8c not found: ID does not exist" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.893016 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-config-data\") pod \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.893196 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-combined-ca-bundle\") pod \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " 
Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.893271 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xfk\" (UniqueName: \"kubernetes.io/projected/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-kube-api-access-w4xfk\") pod \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\" (UID: \"1d60b300-02c2-4bc8-908d-2f7e2b5bddad\") " Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.896329 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.903111 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-kube-api-access-w4xfk" (OuterVolumeSpecName: "kube-api-access-w4xfk") pod "1d60b300-02c2-4bc8-908d-2f7e2b5bddad" (UID: "1d60b300-02c2-4bc8-908d-2f7e2b5bddad"). InnerVolumeSpecName "kube-api-access-w4xfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.905781 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xfk\" (UniqueName: \"kubernetes.io/projected/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-kube-api-access-w4xfk\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.929873 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.934429 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-config-data" (OuterVolumeSpecName: "config-data") pod "1d60b300-02c2-4bc8-908d-2f7e2b5bddad" (UID: "1d60b300-02c2-4bc8-908d-2f7e2b5bddad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:43 crc kubenswrapper[4799]: I0127 09:21:43.940522 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d60b300-02c2-4bc8-908d-2f7e2b5bddad" (UID: "1d60b300-02c2-4bc8-908d-2f7e2b5bddad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.008150 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.008185 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d60b300-02c2-4bc8-908d-2f7e2b5bddad-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.153850 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.164795 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.185771 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:21:44 crc kubenswrapper[4799]: E0127 09:21:44.186511 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d60b300-02c2-4bc8-908d-2f7e2b5bddad" containerName="nova-scheduler-scheduler" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.186535 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d60b300-02c2-4bc8-908d-2f7e2b5bddad" containerName="nova-scheduler-scheduler" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.186823 4799 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1d60b300-02c2-4bc8-908d-2f7e2b5bddad" containerName="nova-scheduler-scheduler" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.187766 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.225087 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.266239 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.332730 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/096c7330-6c6a-48f8-bd44-ee5ed6893012-config-data\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.332824 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qjpg\" (UniqueName: \"kubernetes.io/projected/096c7330-6c6a-48f8-bd44-ee5ed6893012-kube-api-access-7qjpg\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.332896 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c7330-6c6a-48f8-bd44-ee5ed6893012-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.435004 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/096c7330-6c6a-48f8-bd44-ee5ed6893012-config-data\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.435072 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qjpg\" (UniqueName: \"kubernetes.io/projected/096c7330-6c6a-48f8-bd44-ee5ed6893012-kube-api-access-7qjpg\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.435114 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c7330-6c6a-48f8-bd44-ee5ed6893012-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.439852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c7330-6c6a-48f8-bd44-ee5ed6893012-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.439886 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/096c7330-6c6a-48f8-bd44-ee5ed6893012-config-data\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.454587 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qjpg\" (UniqueName: \"kubernetes.io/projected/096c7330-6c6a-48f8-bd44-ee5ed6893012-kube-api-access-7qjpg\") pod \"nova-scheduler-0\" (UID: \"096c7330-6c6a-48f8-bd44-ee5ed6893012\") " 
pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.467324 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d60b300-02c2-4bc8-908d-2f7e2b5bddad" path="/var/lib/kubelet/pods/1d60b300-02c2-4bc8-908d-2f7e2b5bddad/volumes" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.468178 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96172775-55ea-448b-a331-8c10c7a1ac20" path="/var/lib/kubelet/pods/96172775-55ea-448b-a331-8c10c7a1ac20/volumes" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.469294 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d15b71-81f3-4700-b916-db1a08d5c5fc" path="/var/lib/kubelet/pods/e8d15b71-81f3-4700-b916-db1a08d5c5fc/volumes" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.561273 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.823981 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23a85377-d942-461f-b381-da5730b8b48d","Type":"ContainerStarted","Data":"d68f855aba3d585c33d308cf9bda0a11218f468a2f9379c65b705725320268ff"} Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.824474 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23a85377-d942-461f-b381-da5730b8b48d","Type":"ContainerStarted","Data":"eb94c283a5a03001c6eb8d8dd0063cea66877db46cdc8e50510cf5066f1c5caf"} Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.824489 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23a85377-d942-461f-b381-da5730b8b48d","Type":"ContainerStarted","Data":"b5a048b2f764143aacb01e2ade7fab089e2eb79193a09a4f851156eb6d58956d"} Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.842401 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c","Type":"ContainerStarted","Data":"1671d1dd6aaa02e6cc491aa4496ba404f9b49c447b64bfd707fcf0cb654a68fd"} Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.842456 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c","Type":"ContainerStarted","Data":"9a989e0d0ee38c5669ac87a081ef82f844e3565aa2902bb3caa1c4eb560e0f91"} Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.842470 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e5450ba-70ac-47fc-ac5c-8cd34f80c39c","Type":"ContainerStarted","Data":"cb92351b187e0b0e7271939407e37248d550981d53a0ae31064687f131a2d8e2"} Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.876712 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8766877060000002 podStartE2EDuration="2.876687706s" podCreationTimestamp="2026-01-27 09:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:44.872475981 +0000 UTC m=+5771.183580046" watchObservedRunningTime="2026-01-27 09:21:44.876687706 +0000 UTC m=+5771.187791771" Jan 27 09:21:44 crc kubenswrapper[4799]: I0127 09:21:44.887070 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.887043677 podStartE2EDuration="2.887043677s" podCreationTimestamp="2026-01-27 09:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:44.857685628 +0000 UTC m=+5771.168789693" watchObservedRunningTime="2026-01-27 09:21:44.887043677 +0000 UTC m=+5771.198147742" Jan 27 09:21:45 crc kubenswrapper[4799]: I0127 09:21:45.148070 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 27 09:21:45 crc kubenswrapper[4799]: I0127 09:21:45.855123 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"096c7330-6c6a-48f8-bd44-ee5ed6893012","Type":"ContainerStarted","Data":"6449dda261cdba59bb404508417a1e851c98179d03f564317ee05ff4b6cc5d1b"} Jan 27 09:21:45 crc kubenswrapper[4799]: I0127 09:21:45.855519 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"096c7330-6c6a-48f8-bd44-ee5ed6893012","Type":"ContainerStarted","Data":"9c407c330d25a3204907c684db368d1189eedb194714dbdd94e40cccb497972c"} Jan 27 09:21:46 crc kubenswrapper[4799]: I0127 09:21:46.099229 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:21:47 crc kubenswrapper[4799]: I0127 09:21:47.885286 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c980df1-a520-4c83-9094-65ffa132b464" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" exitCode=0 Jan 27 09:21:47 crc kubenswrapper[4799]: I0127 09:21:47.885378 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c980df1-a520-4c83-9094-65ffa132b464","Type":"ContainerDied","Data":"942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb"} Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.210639 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.245551 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.246167 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.266038 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.266011417 podStartE2EDuration="4.266011417s" podCreationTimestamp="2026-01-27 09:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:45.883640393 +0000 UTC m=+5772.194744458" watchObservedRunningTime="2026-01-27 09:21:48.266011417 +0000 UTC m=+5774.577115482" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.342054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-combined-ca-bundle\") pod \"3c980df1-a520-4c83-9094-65ffa132b464\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.342228 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-config-data\") pod \"3c980df1-a520-4c83-9094-65ffa132b464\" (UID: \"3c980df1-a520-4c83-9094-65ffa132b464\") " Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.342292 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgspb\" (UniqueName: \"kubernetes.io/projected/3c980df1-a520-4c83-9094-65ffa132b464-kube-api-access-qgspb\") pod \"3c980df1-a520-4c83-9094-65ffa132b464\" (UID: 
\"3c980df1-a520-4c83-9094-65ffa132b464\") " Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.359562 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c980df1-a520-4c83-9094-65ffa132b464-kube-api-access-qgspb" (OuterVolumeSpecName: "kube-api-access-qgspb") pod "3c980df1-a520-4c83-9094-65ffa132b464" (UID: "3c980df1-a520-4c83-9094-65ffa132b464"). InnerVolumeSpecName "kube-api-access-qgspb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.376561 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-config-data" (OuterVolumeSpecName: "config-data") pod "3c980df1-a520-4c83-9094-65ffa132b464" (UID: "3c980df1-a520-4c83-9094-65ffa132b464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.376901 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c980df1-a520-4c83-9094-65ffa132b464" (UID: "3c980df1-a520-4c83-9094-65ffa132b464"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.444566 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgspb\" (UniqueName: \"kubernetes.io/projected/3c980df1-a520-4c83-9094-65ffa132b464-kube-api-access-qgspb\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.444632 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.444645 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c980df1-a520-4c83-9094-65ffa132b464-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.898983 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c980df1-a520-4c83-9094-65ffa132b464","Type":"ContainerDied","Data":"cecdbd2a03c842c48d0f26d4c8955b3e51fafcc9153f28a775f60d87008b6e20"} Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.899045 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.899087 4799 scope.go:117] "RemoveContainer" containerID="942bd87edd1bb3698510d7f67f940d7972fd60a9d5885b49f21a7467b2b57dcb" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.932257 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.949051 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.961452 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:21:48 crc kubenswrapper[4799]: E0127 09:21:48.962073 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c980df1-a520-4c83-9094-65ffa132b464" containerName="nova-cell1-conductor-conductor" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.962093 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c980df1-a520-4c83-9094-65ffa132b464" containerName="nova-cell1-conductor-conductor" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.962279 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c980df1-a520-4c83-9094-65ffa132b464" containerName="nova-cell1-conductor-conductor" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.963164 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.968184 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 09:21:48 crc kubenswrapper[4799]: I0127 09:21:48.975635 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.056909 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n8sw\" (UniqueName: \"kubernetes.io/projected/f9091e3a-e58d-4bbb-9f81-78db65d552dd-kube-api-access-2n8sw\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.056972 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9091e3a-e58d-4bbb-9f81-78db65d552dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.057361 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9091e3a-e58d-4bbb-9f81-78db65d552dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.159163 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9091e3a-e58d-4bbb-9f81-78db65d552dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc 
kubenswrapper[4799]: I0127 09:21:49.159270 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n8sw\" (UniqueName: \"kubernetes.io/projected/f9091e3a-e58d-4bbb-9f81-78db65d552dd-kube-api-access-2n8sw\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.159322 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9091e3a-e58d-4bbb-9f81-78db65d552dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.163236 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9091e3a-e58d-4bbb-9f81-78db65d552dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.167057 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9091e3a-e58d-4bbb-9f81-78db65d552dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.180041 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n8sw\" (UniqueName: \"kubernetes.io/projected/f9091e3a-e58d-4bbb-9f81-78db65d552dd-kube-api-access-2n8sw\") pod \"nova-cell1-conductor-0\" (UID: \"f9091e3a-e58d-4bbb-9f81-78db65d552dd\") " pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.286625 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.562094 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.753112 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 09:21:49 crc kubenswrapper[4799]: W0127 09:21:49.756326 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9091e3a_e58d_4bbb_9f81_78db65d552dd.slice/crio-838b38d307cf26293d0aa7da9acc969a782b1b4fc76cd0d2742b3476c496d6de WatchSource:0}: Error finding container 838b38d307cf26293d0aa7da9acc969a782b1b4fc76cd0d2742b3476c496d6de: Status 404 returned error can't find the container with id 838b38d307cf26293d0aa7da9acc969a782b1b4fc76cd0d2742b3476c496d6de Jan 27 09:21:49 crc kubenswrapper[4799]: I0127 09:21:49.915367 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9091e3a-e58d-4bbb-9f81-78db65d552dd","Type":"ContainerStarted","Data":"838b38d307cf26293d0aa7da9acc969a782b1b4fc76cd0d2742b3476c496d6de"} Jan 27 09:21:50 crc kubenswrapper[4799]: I0127 09:21:50.469695 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c980df1-a520-4c83-9094-65ffa132b464" path="/var/lib/kubelet/pods/3c980df1-a520-4c83-9094-65ffa132b464/volumes" Jan 27 09:21:50 crc kubenswrapper[4799]: I0127 09:21:50.929937 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9091e3a-e58d-4bbb-9f81-78db65d552dd","Type":"ContainerStarted","Data":"cc00c8acee035a3538821c7afd15aca3e1b72b1e0de8c356b50a1d08b753fe58"} Jan 27 09:21:50 crc kubenswrapper[4799]: I0127 09:21:50.930232 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:50 crc 
kubenswrapper[4799]: I0127 09:21:50.952202 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.952170351 podStartE2EDuration="2.952170351s" podCreationTimestamp="2026-01-27 09:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:21:50.945811757 +0000 UTC m=+5777.256915832" watchObservedRunningTime="2026-01-27 09:21:50.952170351 +0000 UTC m=+5777.263274416" Jan 27 09:21:51 crc kubenswrapper[4799]: I0127 09:21:51.099379 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:21:51 crc kubenswrapper[4799]: I0127 09:21:51.118215 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:21:51 crc kubenswrapper[4799]: I0127 09:21:51.184089 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 09:21:51 crc kubenswrapper[4799]: I0127 09:21:51.949992 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 09:21:53 crc kubenswrapper[4799]: I0127 09:21:53.215110 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:21:53 crc kubenswrapper[4799]: I0127 09:21:53.215553 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 09:21:53 crc kubenswrapper[4799]: I0127 09:21:53.246962 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:21:53 crc kubenswrapper[4799]: I0127 09:21:53.247038 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 09:21:53 crc kubenswrapper[4799]: I0127 09:21:53.731157 4799 
patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:21:53 crc kubenswrapper[4799]: I0127 09:21:53.731271 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:21:54 crc kubenswrapper[4799]: I0127 09:21:54.323319 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 09:21:54 crc kubenswrapper[4799]: I0127 09:21:54.379660 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9e5450ba-70ac-47fc-ac5c-8cd34f80c39c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:21:54 crc kubenswrapper[4799]: I0127 09:21:54.379749 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23a85377-d942-461f-b381-da5730b8b48d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.74:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:21:54 crc kubenswrapper[4799]: I0127 09:21:54.379658 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23a85377-d942-461f-b381-da5730b8b48d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.74:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:21:54 crc kubenswrapper[4799]: I0127 
09:21:54.380505 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9e5450ba-70ac-47fc-ac5c-8cd34f80c39c" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:21:54 crc kubenswrapper[4799]: I0127 09:21:54.561949 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 09:21:54 crc kubenswrapper[4799]: I0127 09:21:54.609812 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 09:21:55 crc kubenswrapper[4799]: I0127 09:21:55.011200 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 09:21:58 crc kubenswrapper[4799]: I0127 09:21:58.968887 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:21:58 crc kubenswrapper[4799]: I0127 09:21:58.972370 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:21:58 crc kubenswrapper[4799]: I0127 09:21:58.975207 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 09:21:58 crc kubenswrapper[4799]: I0127 09:21:58.982578 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.112519 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.112680 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.112756 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b249fc-065f-47b9-ba6b-c49e493752e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.112782 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.112825 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdj2m\" (UniqueName: \"kubernetes.io/projected/30b249fc-065f-47b9-ba6b-c49e493752e4-kube-api-access-vdj2m\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.112859 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.216028 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.216429 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b249fc-065f-47b9-ba6b-c49e493752e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.216508 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.216637 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdj2m\" (UniqueName: 
\"kubernetes.io/projected/30b249fc-065f-47b9-ba6b-c49e493752e4-kube-api-access-vdj2m\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.216771 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.216849 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.216636 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b249fc-065f-47b9-ba6b-c49e493752e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.231256 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.233751 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " 
pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.233821 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.245240 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.250552 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdj2m\" (UniqueName: \"kubernetes.io/projected/30b249fc-065f-47b9-ba6b-c49e493752e4-kube-api-access-vdj2m\") pod \"cinder-scheduler-0\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.313151 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:21:59 crc kubenswrapper[4799]: W0127 09:21:59.875512 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30b249fc_065f_47b9_ba6b_c49e493752e4.slice/crio-549c3a0539c1706e38a486a31b8685f6016cbf6bea561de108ac0ad9f1824242 WatchSource:0}: Error finding container 549c3a0539c1706e38a486a31b8685f6016cbf6bea561de108ac0ad9f1824242: Status 404 returned error can't find the container with id 549c3a0539c1706e38a486a31b8685f6016cbf6bea561de108ac0ad9f1824242 Jan 27 09:21:59 crc kubenswrapper[4799]: I0127 09:21:59.879989 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:22:00 crc kubenswrapper[4799]: I0127 09:22:00.045101 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"30b249fc-065f-47b9-ba6b-c49e493752e4","Type":"ContainerStarted","Data":"549c3a0539c1706e38a486a31b8685f6016cbf6bea561de108ac0ad9f1824242"} Jan 27 09:22:00 crc kubenswrapper[4799]: I0127 09:22:00.420026 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:22:00 crc kubenswrapper[4799]: I0127 09:22:00.420723 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api-log" containerID="cri-o://9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05" gracePeriod=30 Jan 27 09:22:00 crc kubenswrapper[4799]: I0127 09:22:00.421252 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api" containerID="cri-o://8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58" gracePeriod=30 Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.060266 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"30b249fc-065f-47b9-ba6b-c49e493752e4","Type":"ContainerStarted","Data":"2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5"} Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.063953 4799 generic.go:334] "Generic (PLEG): container finished" podID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerID="9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05" exitCode=143 Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.064033 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2bf11fd-b6af-47c8-86b0-e16b36b8841e","Type":"ContainerDied","Data":"9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05"} Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.115571 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.121778 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.128329 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.138769 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271620 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271685 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271728 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271848 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxlz\" (UniqueName: \"kubernetes.io/projected/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-kube-api-access-zgxlz\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " 
pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271895 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271916 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-run\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271937 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.271978 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.272014 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc 
kubenswrapper[4799]: I0127 09:22:01.272073 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.272098 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.272165 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.272198 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.272218 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.272246 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.272279 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.374213 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.374499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.375292 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.375444 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" 
(UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.375613 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.375737 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.375834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.375535 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.375915 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: 
\"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376136 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376176 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376439 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxlz\" (UniqueName: \"kubernetes.io/projected/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-kube-api-access-zgxlz\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376543 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376630 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-run\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 
09:22:01.376707 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376956 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376863 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376698 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-run\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.376638 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.377228 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.377398 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.377556 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.377272 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 
09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.377234 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.383789 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.385527 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.386497 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.386994 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.398507 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.402600 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxlz\" (UniqueName: \"kubernetes.io/projected/1aaf2cd9-1e7e-487e-abc5-b49315e2b068-kube-api-access-zgxlz\") pod \"cinder-volume-volume1-0\" (UID: \"1aaf2cd9-1e7e-487e-abc5-b49315e2b068\") " pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.460157 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.883064 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.885759 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.888151 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.902910 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.994917 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.995810 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d6405a1-c618-4492-95ee-bc909981d06c-ceph\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.995925 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj27f\" (UniqueName: \"kubernetes.io/projected/0d6405a1-c618-4492-95ee-bc909981d06c-kube-api-access-sj27f\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.996096 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.996399 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-dev\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.996464 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-sys\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.996522 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.996716 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-scripts\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.996872 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.996995 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.997038 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-run\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.997110 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-config-data\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.997138 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.997173 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.997208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:01 crc kubenswrapper[4799]: I0127 09:22:01.997426 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.098037 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099664 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d6405a1-c618-4492-95ee-bc909981d06c-ceph\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099751 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj27f\" (UniqueName: \"kubernetes.io/projected/0d6405a1-c618-4492-95ee-bc909981d06c-kube-api-access-sj27f\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099804 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-lib-modules\") pod \"cinder-backup-0\" (UID: 
\"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099850 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-dev\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-sys\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099887 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099927 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-scripts\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.099961 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.100001 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.100024 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-run\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.100048 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-config-data\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.100066 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.100090 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.100112 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc 
kubenswrapper[4799]: I0127 09:22:02.100135 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.100276 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.101668 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-sys\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.101675 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102221 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102281 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"30b249fc-065f-47b9-ba6b-c49e493752e4","Type":"ContainerStarted","Data":"28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152"} Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102341 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102389 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102627 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102667 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-run\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102738 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.102771 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0d6405a1-c618-4492-95ee-bc909981d06c-dev\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.119318 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.120002 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d6405a1-c618-4492-95ee-bc909981d06c-ceph\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.122405 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.123679 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-scripts\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.133875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj27f\" (UniqueName: \"kubernetes.io/projected/0d6405a1-c618-4492-95ee-bc909981d06c-kube-api-access-sj27f\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc 
kubenswrapper[4799]: I0127 09:22:02.136878 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d6405a1-c618-4492-95ee-bc909981d06c-config-data\") pod \"cinder-backup-0\" (UID: \"0d6405a1-c618-4492-95ee-bc909981d06c\") " pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.136874 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.1368427 podStartE2EDuration="4.1368427s" podCreationTimestamp="2026-01-27 09:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:22:02.133092798 +0000 UTC m=+5788.444196863" watchObservedRunningTime="2026-01-27 09:22:02.1368427 +0000 UTC m=+5788.447946755" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.218081 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 09:22:02 crc kubenswrapper[4799]: I0127 09:22:02.871048 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.118273 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1aaf2cd9-1e7e-487e-abc5-b49315e2b068","Type":"ContainerStarted","Data":"22fd32a81740d000bd0649f95796469dc62c9daa0f0cb87032119f92e412edad"} Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.121117 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0d6405a1-c618-4492-95ee-bc909981d06c","Type":"ContainerStarted","Data":"7ab52bf6ac585d89afc200fade331d42e80a1a59884a3222b13a058b93b2c493"} Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.225549 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:22:03 crc 
kubenswrapper[4799]: I0127 09:22:03.226735 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.229826 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.229895 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.255902 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.256012 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.260612 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.262059 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 09:22:03 crc kubenswrapper[4799]: I0127 09:22:03.629857 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.71:8776/healthcheck\": read tcp 10.217.0.2:38132->10.217.1.71:8776: read: connection reset by peer" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.072791 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.151542 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1aaf2cd9-1e7e-487e-abc5-b49315e2b068","Type":"ContainerStarted","Data":"c5ee48d7e9678ba3dda52c6a1d2eb1277ca270062ccb8e4bc93afefc70f2b845"} Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.151973 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1aaf2cd9-1e7e-487e-abc5-b49315e2b068","Type":"ContainerStarted","Data":"d8f9bf97a6af1315fe30ce55200938b554ef7f1b6268993b3ca5f42170bbf66b"} Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.155865 4799 generic.go:334] "Generic (PLEG): container finished" podID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerID="8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58" exitCode=0 Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.155930 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.155954 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2bf11fd-b6af-47c8-86b0-e16b36b8841e","Type":"ContainerDied","Data":"8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58"} Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.155988 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d2bf11fd-b6af-47c8-86b0-e16b36b8841e","Type":"ContainerDied","Data":"d0733491def361276807d9decc9eca4fa93c653c5544d736f63ca05cfb465d37"} Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.156008 4799 scope.go:117] "RemoveContainer" containerID="8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.157492 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-899sz\" (UniqueName: \"kubernetes.io/projected/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-kube-api-access-899sz\") pod \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.157544 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-logs\") pod \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.157712 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data\") pod \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.157797 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-combined-ca-bundle\") pod \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.157970 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data-custom\") pod \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.158052 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-scripts\") pod \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.158084 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-etc-machine-id\") pod \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\" (UID: \"d2bf11fd-b6af-47c8-86b0-e16b36b8841e\") " Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.161205 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d2bf11fd-b6af-47c8-86b0-e16b36b8841e" (UID: "d2bf11fd-b6af-47c8-86b0-e16b36b8841e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.161817 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.168765 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-logs" (OuterVolumeSpecName: "logs") pod "d2bf11fd-b6af-47c8-86b0-e16b36b8841e" (UID: "d2bf11fd-b6af-47c8-86b0-e16b36b8841e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.175498 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d2bf11fd-b6af-47c8-86b0-e16b36b8841e" (UID: "d2bf11fd-b6af-47c8-86b0-e16b36b8841e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.176314 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-scripts" (OuterVolumeSpecName: "scripts") pod "d2bf11fd-b6af-47c8-86b0-e16b36b8841e" (UID: "d2bf11fd-b6af-47c8-86b0-e16b36b8841e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.176329 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-kube-api-access-899sz" (OuterVolumeSpecName: "kube-api-access-899sz") pod "d2bf11fd-b6af-47c8-86b0-e16b36b8841e" (UID: "d2bf11fd-b6af-47c8-86b0-e16b36b8841e"). 
InnerVolumeSpecName "kube-api-access-899sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.186241 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0d6405a1-c618-4492-95ee-bc909981d06c","Type":"ContainerStarted","Data":"275e0d3d175828e979985ad97aa9fe61c6cf8e45b0c60cafd39d4a42b4e4b7ac"} Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.186976 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.197551 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.206159 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.454604549 podStartE2EDuration="3.206134933s" podCreationTimestamp="2026-01-27 09:22:01 +0000 UTC" firstStartedPulling="2026-01-27 09:22:02.135137614 +0000 UTC m=+5788.446241679" lastFinishedPulling="2026-01-27 09:22:02.886667998 +0000 UTC m=+5789.197772063" observedRunningTime="2026-01-27 09:22:04.18362784 +0000 UTC m=+5790.494731905" watchObservedRunningTime="2026-01-27 09:22:04.206134933 +0000 UTC m=+5790.517238988" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.212638 4799 scope.go:117] "RemoveContainer" containerID="9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.250575 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2bf11fd-b6af-47c8-86b0-e16b36b8841e" (UID: "d2bf11fd-b6af-47c8-86b0-e16b36b8841e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.265898 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.265946 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.265959 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-899sz\" (UniqueName: \"kubernetes.io/projected/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-kube-api-access-899sz\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.265972 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.265981 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.305557 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data" (OuterVolumeSpecName: "config-data") pod "d2bf11fd-b6af-47c8-86b0-e16b36b8841e" (UID: "d2bf11fd-b6af-47c8-86b0-e16b36b8841e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.318622 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.342036 4799 scope.go:117] "RemoveContainer" containerID="8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58" Jan 27 09:22:04 crc kubenswrapper[4799]: E0127 09:22:04.342950 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58\": container with ID starting with 8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58 not found: ID does not exist" containerID="8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.343029 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58"} err="failed to get container status \"8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58\": rpc error: code = NotFound desc = could not find container \"8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58\": container with ID starting with 8b359406a4603457995409e89ae340f9f3855dcadc6eea64346391979ed02a58 not found: ID does not exist" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.343075 4799 scope.go:117] "RemoveContainer" containerID="9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05" Jan 27 09:22:04 crc kubenswrapper[4799]: E0127 09:22:04.343533 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05\": container with ID starting with 9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05 
not found: ID does not exist" containerID="9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.343560 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05"} err="failed to get container status \"9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05\": rpc error: code = NotFound desc = could not find container \"9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05\": container with ID starting with 9b8ed6d76dcd92bc07af6c53b70aa4ff35e1c2891cb4d4efeb478f20c7dd1d05 not found: ID does not exist" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.368525 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf11fd-b6af-47c8-86b0-e16b36b8841e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.583958 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.617423 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.637506 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:22:04 crc kubenswrapper[4799]: E0127 09:22:04.638202 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api-log" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.638229 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api-log" Jan 27 09:22:04 crc kubenswrapper[4799]: E0127 09:22:04.638243 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api" 
Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.638252 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.638507 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api-log" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.638540 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" containerName="cinder-api" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.640085 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.646165 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.656495 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.782225 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-scripts\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.782282 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.782388 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/7fec7388-0fd9-4481-adff-14df549f15ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.782734 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.782827 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-config-data\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.782853 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fec7388-0fd9-4481-adff-14df549f15ba-logs\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.783019 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fl9\" (UniqueName: \"kubernetes.io/projected/7fec7388-0fd9-4481-adff-14df549f15ba-kube-api-access-g7fl9\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.885927 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7fl9\" (UniqueName: \"kubernetes.io/projected/7fec7388-0fd9-4481-adff-14df549f15ba-kube-api-access-g7fl9\") pod \"cinder-api-0\" (UID: 
\"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.886116 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-scripts\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.886140 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.887421 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7fec7388-0fd9-4481-adff-14df549f15ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.886771 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7fec7388-0fd9-4481-adff-14df549f15ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.887574 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.887625 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-config-data\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.887653 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fec7388-0fd9-4481-adff-14df549f15ba-logs\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.888139 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fec7388-0fd9-4481-adff-14df549f15ba-logs\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.894547 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.894762 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-config-data\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.896243 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.910511 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fec7388-0fd9-4481-adff-14df549f15ba-scripts\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.914942 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7fl9\" (UniqueName: \"kubernetes.io/projected/7fec7388-0fd9-4481-adff-14df549f15ba-kube-api-access-g7fl9\") pod \"cinder-api-0\" (UID: \"7fec7388-0fd9-4481-adff-14df549f15ba\") " pod="openstack/cinder-api-0" Jan 27 09:22:04 crc kubenswrapper[4799]: I0127 09:22:04.980065 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 09:22:05 crc kubenswrapper[4799]: I0127 09:22:05.234692 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0d6405a1-c618-4492-95ee-bc909981d06c","Type":"ContainerStarted","Data":"ce4d3569f1af834e7408428df2e9fc63316f013beae5459749434c0c4c2804c7"} Jan 27 09:22:05 crc kubenswrapper[4799]: I0127 09:22:05.273052 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.336057169 podStartE2EDuration="4.273026722s" podCreationTimestamp="2026-01-27 09:22:01 +0000 UTC" firstStartedPulling="2026-01-27 09:22:02.885868297 +0000 UTC m=+5789.196972362" lastFinishedPulling="2026-01-27 09:22:03.82283786 +0000 UTC m=+5790.133941915" observedRunningTime="2026-01-27 09:22:05.268034396 +0000 UTC m=+5791.579138461" watchObservedRunningTime="2026-01-27 09:22:05.273026722 +0000 UTC m=+5791.584130787" Jan 27 09:22:05 crc kubenswrapper[4799]: I0127 09:22:05.323284 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 09:22:05 crc kubenswrapper[4799]: W0127 09:22:05.323416 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fec7388_0fd9_4481_adff_14df549f15ba.slice/crio-3559a0190101b80b892ab6fd58aca576e704732c51957c553df13eecfe6dae73 WatchSource:0}: Error finding container 3559a0190101b80b892ab6fd58aca576e704732c51957c553df13eecfe6dae73: Status 404 returned error can't find the container with id 3559a0190101b80b892ab6fd58aca576e704732c51957c553df13eecfe6dae73 Jan 27 09:22:06 crc kubenswrapper[4799]: I0127 09:22:06.307934 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7fec7388-0fd9-4481-adff-14df549f15ba","Type":"ContainerStarted","Data":"fd915eb8a409e9bc258925caf5a09a410f012440025e7b392584601daf2f0834"} Jan 27 09:22:06 crc kubenswrapper[4799]: I0127 09:22:06.308740 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7fec7388-0fd9-4481-adff-14df549f15ba","Type":"ContainerStarted","Data":"3559a0190101b80b892ab6fd58aca576e704732c51957c553df13eecfe6dae73"} Jan 27 09:22:06 crc kubenswrapper[4799]: I0127 09:22:06.493318 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bf11fd-b6af-47c8-86b0-e16b36b8841e" path="/var/lib/kubelet/pods/d2bf11fd-b6af-47c8-86b0-e16b36b8841e/volumes" Jan 27 09:22:06 crc kubenswrapper[4799]: I0127 09:22:06.512950 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:07 crc kubenswrapper[4799]: I0127 09:22:07.219367 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 27 09:22:07 crc kubenswrapper[4799]: I0127 09:22:07.320977 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7fec7388-0fd9-4481-adff-14df549f15ba","Type":"ContainerStarted","Data":"c61ae1eb86920a8354b89c8428406f5a08ed73b228e1986ba039e33d94b30bf5"} Jan 27 09:22:07 crc kubenswrapper[4799]: I0127 09:22:07.368785 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.368752515 podStartE2EDuration="3.368752515s" podCreationTimestamp="2026-01-27 09:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:22:07.349537571 +0000 UTC m=+5793.660641656" watchObservedRunningTime="2026-01-27 09:22:07.368752515 +0000 UTC m=+5793.679856580" Jan 27 09:22:08 crc kubenswrapper[4799]: I0127 09:22:08.332838 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 09:22:09 crc kubenswrapper[4799]: I0127 09:22:09.564139 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 09:22:09 crc kubenswrapper[4799]: I0127 09:22:09.640261 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:22:10 crc kubenswrapper[4799]: I0127 09:22:10.354420 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="cinder-scheduler" containerID="cri-o://2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5" gracePeriod=30 Jan 27 09:22:10 crc kubenswrapper[4799]: I0127 09:22:10.354816 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="probe" containerID="cri-o://28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152" gracePeriod=30 Jan 27 09:22:11 crc kubenswrapper[4799]: I0127 09:22:11.369057 4799 generic.go:334] "Generic (PLEG): container finished" podID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerID="28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152" exitCode=0 Jan 27 09:22:11 crc kubenswrapper[4799]: I0127 09:22:11.369168 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"30b249fc-065f-47b9-ba6b-c49e493752e4","Type":"ContainerDied","Data":"28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152"} Jan 27 09:22:11 crc kubenswrapper[4799]: I0127 09:22:11.704775 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 27 09:22:12 crc kubenswrapper[4799]: I0127 09:22:12.510455 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.381053 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.393515 4799 generic.go:334] "Generic (PLEG): container finished" podID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerID="2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5" exitCode=0 Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.393575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"30b249fc-065f-47b9-ba6b-c49e493752e4","Type":"ContainerDied","Data":"2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5"} Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.393613 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"30b249fc-065f-47b9-ba6b-c49e493752e4","Type":"ContainerDied","Data":"549c3a0539c1706e38a486a31b8685f6016cbf6bea561de108ac0ad9f1824242"} Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.393635 4799 scope.go:117] "RemoveContainer" containerID="28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.393799 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.431724 4799 scope.go:117] "RemoveContainer" containerID="2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.437500 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdj2m\" (UniqueName: \"kubernetes.io/projected/30b249fc-065f-47b9-ba6b-c49e493752e4-kube-api-access-vdj2m\") pod \"30b249fc-065f-47b9-ba6b-c49e493752e4\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.437576 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-combined-ca-bundle\") pod \"30b249fc-065f-47b9-ba6b-c49e493752e4\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.437840 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data-custom\") pod \"30b249fc-065f-47b9-ba6b-c49e493752e4\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.437978 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data\") pod \"30b249fc-065f-47b9-ba6b-c49e493752e4\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.438065 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-scripts\") pod \"30b249fc-065f-47b9-ba6b-c49e493752e4\" (UID: 
\"30b249fc-065f-47b9-ba6b-c49e493752e4\") " Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.438125 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b249fc-065f-47b9-ba6b-c49e493752e4-etc-machine-id\") pod \"30b249fc-065f-47b9-ba6b-c49e493752e4\" (UID: \"30b249fc-065f-47b9-ba6b-c49e493752e4\") " Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.438650 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30b249fc-065f-47b9-ba6b-c49e493752e4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30b249fc-065f-47b9-ba6b-c49e493752e4" (UID: "30b249fc-065f-47b9-ba6b-c49e493752e4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.456612 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30b249fc-065f-47b9-ba6b-c49e493752e4" (UID: "30b249fc-065f-47b9-ba6b-c49e493752e4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.461013 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b249fc-065f-47b9-ba6b-c49e493752e4-kube-api-access-vdj2m" (OuterVolumeSpecName: "kube-api-access-vdj2m") pod "30b249fc-065f-47b9-ba6b-c49e493752e4" (UID: "30b249fc-065f-47b9-ba6b-c49e493752e4"). InnerVolumeSpecName "kube-api-access-vdj2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.465523 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-scripts" (OuterVolumeSpecName: "scripts") pod "30b249fc-065f-47b9-ba6b-c49e493752e4" (UID: "30b249fc-065f-47b9-ba6b-c49e493752e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.475496 4799 scope.go:117] "RemoveContainer" containerID="28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152" Jan 27 09:22:13 crc kubenswrapper[4799]: E0127 09:22:13.476217 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152\": container with ID starting with 28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152 not found: ID does not exist" containerID="28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.476283 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152"} err="failed to get container status \"28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152\": rpc error: code = NotFound desc = could not find container \"28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152\": container with ID starting with 28599c016e8448916447305eb5ac355c85a6dbc4d8ffd228bec97c159722c152 not found: ID does not exist" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.476337 4799 scope.go:117] "RemoveContainer" containerID="2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5" Jan 27 09:22:13 crc kubenswrapper[4799]: E0127 09:22:13.476667 4799 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5\": container with ID starting with 2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5 not found: ID does not exist" containerID="2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.476849 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5"} err="failed to get container status \"2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5\": rpc error: code = NotFound desc = could not find container \"2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5\": container with ID starting with 2565a2b05da59f85454f9dbe6b95321d2fc2c958863b3a9c300725127ad065d5 not found: ID does not exist" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.513193 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30b249fc-065f-47b9-ba6b-c49e493752e4" (UID: "30b249fc-065f-47b9-ba6b-c49e493752e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.543861 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.543924 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b249fc-065f-47b9-ba6b-c49e493752e4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.543945 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdj2m\" (UniqueName: \"kubernetes.io/projected/30b249fc-065f-47b9-ba6b-c49e493752e4-kube-api-access-vdj2m\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.543958 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.543975 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.588624 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data" (OuterVolumeSpecName: "config-data") pod "30b249fc-065f-47b9-ba6b-c49e493752e4" (UID: "30b249fc-065f-47b9-ba6b-c49e493752e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.646331 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b249fc-065f-47b9-ba6b-c49e493752e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.738037 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.754733 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.776131 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:22:13 crc kubenswrapper[4799]: E0127 09:22:13.776925 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="cinder-scheduler" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.776951 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="cinder-scheduler" Jan 27 09:22:13 crc kubenswrapper[4799]: E0127 09:22:13.777003 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="probe" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.777012 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="probe" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.777268 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="cinder-scheduler" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.777324 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" containerName="probe" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 
09:22:13.778885 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.782789 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.799230 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.860529 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-scripts\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.860755 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.860923 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7481c01b-ab94-4a72-a35c-033cd195be3b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.861203 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" 
Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.861549 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc884\" (UniqueName: \"kubernetes.io/projected/7481c01b-ab94-4a72-a35c-033cd195be3b-kube-api-access-hc884\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.861812 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-config-data\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.967001 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc884\" (UniqueName: \"kubernetes.io/projected/7481c01b-ab94-4a72-a35c-033cd195be3b-kube-api-access-hc884\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.967270 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-config-data\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.967805 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-scripts\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.967950 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.968037 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7481c01b-ab94-4a72-a35c-033cd195be3b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.968217 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.972055 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7481c01b-ab94-4a72-a35c-033cd195be3b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.975909 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.975972 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.982025 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-scripts\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.982237 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7481c01b-ab94-4a72-a35c-033cd195be3b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:13 crc kubenswrapper[4799]: I0127 09:22:13.989413 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc884\" (UniqueName: \"kubernetes.io/projected/7481c01b-ab94-4a72-a35c-033cd195be3b-kube-api-access-hc884\") pod \"cinder-scheduler-0\" (UID: \"7481c01b-ab94-4a72-a35c-033cd195be3b\") " pod="openstack/cinder-scheduler-0" Jan 27 09:22:14 crc kubenswrapper[4799]: I0127 09:22:14.118144 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 09:22:14 crc kubenswrapper[4799]: I0127 09:22:14.464966 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b249fc-065f-47b9-ba6b-c49e493752e4" path="/var/lib/kubelet/pods/30b249fc-065f-47b9-ba6b-c49e493752e4/volumes" Jan 27 09:22:14 crc kubenswrapper[4799]: I0127 09:22:14.645418 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 09:22:15 crc kubenswrapper[4799]: I0127 09:22:15.423832 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7481c01b-ab94-4a72-a35c-033cd195be3b","Type":"ContainerStarted","Data":"584ad4d521ddc5b88755bfb0c1a30f3d9985477e9c3675e43951927ba24b2641"} Jan 27 09:22:15 crc kubenswrapper[4799]: I0127 09:22:15.424776 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7481c01b-ab94-4a72-a35c-033cd195be3b","Type":"ContainerStarted","Data":"47d7749b87a149b3b5b8f356c8ff5d1d481ea55bc7d4749044129e009844ab83"} Jan 27 09:22:16 crc kubenswrapper[4799]: I0127 09:22:16.439957 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7481c01b-ab94-4a72-a35c-033cd195be3b","Type":"ContainerStarted","Data":"09808c9598008b9f473bf38a57ce7df9c653d3cad4523fbd51831b35c5a2e4b0"} Jan 27 09:22:16 crc kubenswrapper[4799]: I0127 09:22:16.476498 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.476474702 podStartE2EDuration="3.476474702s" podCreationTimestamp="2026-01-27 09:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:22:16.469208614 +0000 UTC m=+5802.780312729" watchObservedRunningTime="2026-01-27 09:22:16.476474702 +0000 UTC m=+5802.787578787" Jan 27 09:22:17 crc kubenswrapper[4799]: I0127 
09:22:17.110260 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 09:22:19 crc kubenswrapper[4799]: I0127 09:22:19.119008 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 09:22:23 crc kubenswrapper[4799]: I0127 09:22:23.731534 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:22:23 crc kubenswrapper[4799]: I0127 09:22:23.732200 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:22:23 crc kubenswrapper[4799]: I0127 09:22:23.732281 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:22:23 crc kubenswrapper[4799]: I0127 09:22:23.733217 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d7b80b862be55b1b06bdfee48de3c7e8807494274cc669fc0412f97be57e1fc"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:22:23 crc kubenswrapper[4799]: I0127 09:22:23.733276 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" 
containerID="cri-o://6d7b80b862be55b1b06bdfee48de3c7e8807494274cc669fc0412f97be57e1fc" gracePeriod=600 Jan 27 09:22:24 crc kubenswrapper[4799]: I0127 09:22:24.357641 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 09:22:24 crc kubenswrapper[4799]: I0127 09:22:24.537535 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="6d7b80b862be55b1b06bdfee48de3c7e8807494274cc669fc0412f97be57e1fc" exitCode=0 Jan 27 09:22:24 crc kubenswrapper[4799]: I0127 09:22:24.537597 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"6d7b80b862be55b1b06bdfee48de3c7e8807494274cc669fc0412f97be57e1fc"} Jan 27 09:22:24 crc kubenswrapper[4799]: I0127 09:22:24.537648 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0"} Jan 27 09:22:24 crc kubenswrapper[4799]: I0127 09:22:24.537684 4799 scope.go:117] "RemoveContainer" containerID="e53d40ac14db909a3b33d00d37d5c302f325f5d1fac5dda723ee1965e8695c47" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.026424 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqtp"] Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.030264 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.052374 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqtp"] Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.146425 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqvq\" (UniqueName: \"kubernetes.io/projected/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-kube-api-access-9xqvq\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.147173 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-utilities\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.147413 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-catalog-content\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.250359 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xqvq\" (UniqueName: \"kubernetes.io/projected/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-kube-api-access-9xqvq\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.250418 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-utilities\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.250446 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-catalog-content\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.251096 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-catalog-content\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.251115 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-utilities\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.280734 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xqvq\" (UniqueName: \"kubernetes.io/projected/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-kube-api-access-9xqvq\") pod \"redhat-marketplace-4mqtp\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.358034 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:37 crc kubenswrapper[4799]: I0127 09:22:37.854761 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqtp"] Jan 27 09:22:38 crc kubenswrapper[4799]: I0127 09:22:38.710070 4799 generic.go:334] "Generic (PLEG): container finished" podID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerID="b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736" exitCode=0 Jan 27 09:22:38 crc kubenswrapper[4799]: I0127 09:22:38.710134 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqtp" event={"ID":"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251","Type":"ContainerDied","Data":"b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736"} Jan 27 09:22:38 crc kubenswrapper[4799]: I0127 09:22:38.710522 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqtp" event={"ID":"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251","Type":"ContainerStarted","Data":"8ee33ce18b334871b694c1a6fbda9381207ca9806c6fa39e0d14b3f76adce5fe"} Jan 27 09:22:38 crc kubenswrapper[4799]: I0127 09:22:38.714422 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:22:39 crc kubenswrapper[4799]: I0127 09:22:39.723111 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqtp" event={"ID":"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251","Type":"ContainerStarted","Data":"56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2"} Jan 27 09:22:40 crc kubenswrapper[4799]: I0127 09:22:40.735653 4799 generic.go:334] "Generic (PLEG): container finished" podID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerID="56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2" exitCode=0 Jan 27 09:22:40 crc kubenswrapper[4799]: I0127 09:22:40.735721 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-4mqtp" event={"ID":"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251","Type":"ContainerDied","Data":"56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2"} Jan 27 09:22:40 crc kubenswrapper[4799]: I0127 09:22:40.959224 4799 scope.go:117] "RemoveContainer" containerID="8ff69720f5dd7dd6ebadbc7e49ac52bf7a3a1d9b5363844091e1fe75049fcd78" Jan 27 09:22:40 crc kubenswrapper[4799]: I0127 09:22:40.987126 4799 scope.go:117] "RemoveContainer" containerID="5f008ea9f352cdbb811bc4cc231aac2f0f99a1cc70b8012e7ed1c37426aa95f0" Jan 27 09:22:41 crc kubenswrapper[4799]: I0127 09:22:41.756029 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqtp" event={"ID":"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251","Type":"ContainerStarted","Data":"4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e"} Jan 27 09:22:41 crc kubenswrapper[4799]: I0127 09:22:41.790293 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4mqtp" podStartSLOduration=3.321493832 podStartE2EDuration="5.790265388s" podCreationTimestamp="2026-01-27 09:22:36 +0000 UTC" firstStartedPulling="2026-01-27 09:22:38.71375491 +0000 UTC m=+5825.024859005" lastFinishedPulling="2026-01-27 09:22:41.182526496 +0000 UTC m=+5827.493630561" observedRunningTime="2026-01-27 09:22:41.779099734 +0000 UTC m=+5828.090203829" watchObservedRunningTime="2026-01-27 09:22:41.790265388 +0000 UTC m=+5828.101369453" Jan 27 09:22:47 crc kubenswrapper[4799]: I0127 09:22:47.358946 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:47 crc kubenswrapper[4799]: I0127 09:22:47.360060 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:47 crc kubenswrapper[4799]: I0127 09:22:47.416928 4799 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:47 crc kubenswrapper[4799]: I0127 09:22:47.877428 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:47 crc kubenswrapper[4799]: I0127 09:22:47.940666 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqtp"] Jan 27 09:22:49 crc kubenswrapper[4799]: I0127 09:22:49.845681 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4mqtp" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerName="registry-server" containerID="cri-o://4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e" gracePeriod=2 Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.373597 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.472580 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-utilities\") pod \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.472886 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-catalog-content\") pod \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.472924 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xqvq\" (UniqueName: 
\"kubernetes.io/projected/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-kube-api-access-9xqvq\") pod \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\" (UID: \"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251\") " Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.473714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-utilities" (OuterVolumeSpecName: "utilities") pod "63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" (UID: "63bc69c2-9cb3-46c9-afa3-0cbc33dd6251"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.480560 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-kube-api-access-9xqvq" (OuterVolumeSpecName: "kube-api-access-9xqvq") pod "63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" (UID: "63bc69c2-9cb3-46c9-afa3-0cbc33dd6251"). InnerVolumeSpecName "kube-api-access-9xqvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.502968 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" (UID: "63bc69c2-9cb3-46c9-afa3-0cbc33dd6251"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.576210 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.576279 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.576296 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xqvq\" (UniqueName: \"kubernetes.io/projected/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251-kube-api-access-9xqvq\") on node \"crc\" DevicePath \"\"" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.862332 4799 generic.go:334] "Generic (PLEG): container finished" podID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerID="4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e" exitCode=0 Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.862479 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mqtp" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.862513 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqtp" event={"ID":"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251","Type":"ContainerDied","Data":"4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e"} Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.863147 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqtp" event={"ID":"63bc69c2-9cb3-46c9-afa3-0cbc33dd6251","Type":"ContainerDied","Data":"8ee33ce18b334871b694c1a6fbda9381207ca9806c6fa39e0d14b3f76adce5fe"} Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.863197 4799 scope.go:117] "RemoveContainer" containerID="4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.909946 4799 scope.go:117] "RemoveContainer" containerID="56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.914353 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqtp"] Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.931647 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqtp"] Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.939260 4799 scope.go:117] "RemoveContainer" containerID="b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.983394 4799 scope.go:117] "RemoveContainer" containerID="4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e" Jan 27 09:22:50 crc kubenswrapper[4799]: E0127 09:22:50.984172 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e\": container with ID starting with 4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e not found: ID does not exist" containerID="4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.984224 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e"} err="failed to get container status \"4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e\": rpc error: code = NotFound desc = could not find container \"4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e\": container with ID starting with 4dd4dbe3fe9350b38f41f2321008d7c7d8ad8d09b6f6c7dbe57e4d2a3df2639e not found: ID does not exist" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.984255 4799 scope.go:117] "RemoveContainer" containerID="56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2" Jan 27 09:22:50 crc kubenswrapper[4799]: E0127 09:22:50.984672 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2\": container with ID starting with 56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2 not found: ID does not exist" containerID="56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.984694 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2"} err="failed to get container status \"56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2\": rpc error: code = NotFound desc = could not find container \"56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2\": container with ID 
starting with 56ae97d5f83a341757dbecdd88a6082db303874141805301b1c6b41d4c4874e2 not found: ID does not exist" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.984710 4799 scope.go:117] "RemoveContainer" containerID="b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736" Jan 27 09:22:50 crc kubenswrapper[4799]: E0127 09:22:50.985079 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736\": container with ID starting with b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736 not found: ID does not exist" containerID="b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736" Jan 27 09:22:50 crc kubenswrapper[4799]: I0127 09:22:50.985104 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736"} err="failed to get container status \"b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736\": rpc error: code = NotFound desc = could not find container \"b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736\": container with ID starting with b3df04c43817de0eb92df9c5161640e19e40614cc219da8b897bf398eb77d736 not found: ID does not exist" Jan 27 09:22:52 crc kubenswrapper[4799]: I0127 09:22:52.468889 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" path="/var/lib/kubelet/pods/63bc69c2-9cb3-46c9-afa3-0cbc33dd6251/volumes" Jan 27 09:23:32 crc kubenswrapper[4799]: I0127 09:23:32.046280 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-98vtf"] Jan 27 09:23:32 crc kubenswrapper[4799]: I0127 09:23:32.057452 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3c45-account-create-update-8259j"] Jan 27 09:23:32 crc kubenswrapper[4799]: I0127 09:23:32.068606 
4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3c45-account-create-update-8259j"] Jan 27 09:23:32 crc kubenswrapper[4799]: I0127 09:23:32.078895 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-98vtf"] Jan 27 09:23:32 crc kubenswrapper[4799]: I0127 09:23:32.466271 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4410868e-475c-4bab-a660-e877aadabc59" path="/var/lib/kubelet/pods/4410868e-475c-4bab-a660-e877aadabc59/volumes" Jan 27 09:23:32 crc kubenswrapper[4799]: I0127 09:23:32.467932 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747711a8-56e5-4b29-a3da-4d2b739a1cc4" path="/var/lib/kubelet/pods/747711a8-56e5-4b29-a3da-4d2b739a1cc4/volumes" Jan 27 09:23:38 crc kubenswrapper[4799]: I0127 09:23:38.070643 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x6dgx"] Jan 27 09:23:38 crc kubenswrapper[4799]: I0127 09:23:38.086143 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x6dgx"] Jan 27 09:23:38 crc kubenswrapper[4799]: I0127 09:23:38.467088 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf26015c-9724-4189-8b76-39774eb4400d" path="/var/lib/kubelet/pods/cf26015c-9724-4189-8b76-39774eb4400d/volumes" Jan 27 09:23:41 crc kubenswrapper[4799]: I0127 09:23:41.174805 4799 scope.go:117] "RemoveContainer" containerID="405436464a5d3be1fc0624f84fa662f2c0b97c14239f44b26c2e00e8ad3c1d8c" Jan 27 09:23:41 crc kubenswrapper[4799]: I0127 09:23:41.224900 4799 scope.go:117] "RemoveContainer" containerID="a1ad14bfc8a25fad73859598d96c48c5aec3c3853ad97922ebe166ff62426284" Jan 27 09:23:41 crc kubenswrapper[4799]: I0127 09:23:41.258600 4799 scope.go:117] "RemoveContainer" containerID="422342557a27d51130aae03a2d6675c73207f4e42f21e6ee4840afe2461ff00f" Jan 27 09:23:52 crc kubenswrapper[4799]: I0127 09:23:52.039373 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-lkkc7"] Jan 27 09:23:52 crc kubenswrapper[4799]: I0127 09:23:52.050619 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lkkc7"] Jan 27 09:23:52 crc kubenswrapper[4799]: I0127 09:23:52.473764 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f0f414-45f8-4563-b38a-0d09caab1f67" path="/var/lib/kubelet/pods/29f0f414-45f8-4563-b38a-0d09caab1f67/volumes" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.204049 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pd9m7"] Jan 27 09:24:05 crc kubenswrapper[4799]: E0127 09:24:05.205193 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerName="extract-utilities" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.205208 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerName="extract-utilities" Jan 27 09:24:05 crc kubenswrapper[4799]: E0127 09:24:05.205227 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerName="registry-server" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.205233 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerName="registry-server" Jan 27 09:24:05 crc kubenswrapper[4799]: E0127 09:24:05.205255 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerName="extract-content" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.205261 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" containerName="extract-content" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.205488 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bc69c2-9cb3-46c9-afa3-0cbc33dd6251" 
containerName="registry-server" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.206956 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.209100 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.209883 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-67xbx" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.213926 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q5btc"] Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.215946 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.223263 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q5btc"] Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.230964 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pd9m7"] Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357592 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-log-ovn\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357665 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-log\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 
09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357709 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqc9\" (UniqueName: \"kubernetes.io/projected/9005d037-5f85-4c10-a08d-dd696195e149-kube-api-access-vwqc9\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357744 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-run-ovn\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1570f261-65d6-442d-8b5b-237d9497476f-scripts\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357801 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-run\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357825 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9005d037-5f85-4c10-a08d-dd696195e149-scripts\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357845 
4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-etc-ovs\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357870 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-run\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-lib\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.357943 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgg7\" (UniqueName: \"kubernetes.io/projected/1570f261-65d6-442d-8b5b-237d9497476f-kube-api-access-nlgg7\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459679 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-lib\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459744 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nlgg7\" (UniqueName: \"kubernetes.io/projected/1570f261-65d6-442d-8b5b-237d9497476f-kube-api-access-nlgg7\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459786 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-log-ovn\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459815 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-log\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459858 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqc9\" (UniqueName: \"kubernetes.io/projected/9005d037-5f85-4c10-a08d-dd696195e149-kube-api-access-vwqc9\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459900 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-run-ovn\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459934 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1570f261-65d6-442d-8b5b-237d9497476f-scripts\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459969 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-run\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.459998 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9005d037-5f85-4c10-a08d-dd696195e149-scripts\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460024 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-etc-ovs\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460055 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-run\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460414 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-lib\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " 
pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460463 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-run\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460555 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-run\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460617 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-etc-ovs\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460686 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-log-ovn\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460739 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9005d037-5f85-4c10-a08d-dd696195e149-var-log\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.460673 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/1570f261-65d6-442d-8b5b-237d9497476f-var-run-ovn\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.463005 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1570f261-65d6-442d-8b5b-237d9497476f-scripts\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.463103 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9005d037-5f85-4c10-a08d-dd696195e149-scripts\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.483172 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgg7\" (UniqueName: \"kubernetes.io/projected/1570f261-65d6-442d-8b5b-237d9497476f-kube-api-access-nlgg7\") pod \"ovn-controller-q5btc\" (UID: \"1570f261-65d6-442d-8b5b-237d9497476f\") " pod="openstack/ovn-controller-q5btc" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.483758 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqc9\" (UniqueName: \"kubernetes.io/projected/9005d037-5f85-4c10-a08d-dd696195e149-kube-api-access-vwqc9\") pod \"ovn-controller-ovs-pd9m7\" (UID: \"9005d037-5f85-4c10-a08d-dd696195e149\") " pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.534755 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:05 crc kubenswrapper[4799]: I0127 09:24:05.556259 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q5btc" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.127413 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q5btc"] Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.409410 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pd9m7"] Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.547746 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wqj9j"] Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.549403 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.552425 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.571223 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wqj9j"] Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.675945 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc" event={"ID":"1570f261-65d6-442d-8b5b-237d9497476f","Type":"ContainerStarted","Data":"f6fd58f9bbd2f63c5c0368ccf0cb48d15992fbbec99f598958c87c451cd1d959"} Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.685548 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pd9m7" event={"ID":"9005d037-5f85-4c10-a08d-dd696195e149","Type":"ContainerStarted","Data":"21600e501b2d439da8ddea9ba01612e1f6ec80dbb2f2f39ba3d787c69eb1e57b"} Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.687181 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-config\") pod \"ovn-controller-metrics-wqj9j\" (UID: 
\"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.687251 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-949sz\" (UniqueName: \"kubernetes.io/projected/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-kube-api-access-949sz\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.687372 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-ovn-rundir\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.687453 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-ovs-rundir\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.789832 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-config\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.790490 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-949sz\" (UniqueName: \"kubernetes.io/projected/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-kube-api-access-949sz\") pod \"ovn-controller-metrics-wqj9j\" (UID: 
\"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.790663 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-ovn-rundir\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.790750 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-ovs-rundir\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.790802 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-config\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.791168 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-ovn-rundir\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.791197 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-ovs-rundir\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc 
kubenswrapper[4799]: I0127 09:24:06.811771 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-949sz\" (UniqueName: \"kubernetes.io/projected/3c28bb76-25ef-4f36-ad9c-011fc5c4687d-kube-api-access-949sz\") pod \"ovn-controller-metrics-wqj9j\" (UID: \"3c28bb76-25ef-4f36-ad9c-011fc5c4687d\") " pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:06 crc kubenswrapper[4799]: I0127 09:24:06.896081 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wqj9j" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.417587 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wqj9j"] Jan 27 09:24:07 crc kubenswrapper[4799]: W0127 09:24:07.425768 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c28bb76_25ef_4f36_ad9c_011fc5c4687d.slice/crio-b021f10103d1d1acc2422e3cb98d744588db6171cda52c601c4c4f56b9bfdaef WatchSource:0}: Error finding container b021f10103d1d1acc2422e3cb98d744588db6171cda52c601c4c4f56b9bfdaef: Status 404 returned error can't find the container with id b021f10103d1d1acc2422e3cb98d744588db6171cda52c601c4c4f56b9bfdaef Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.691176 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-76ccs"] Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.692866 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.705767 4799 generic.go:334] "Generic (PLEG): container finished" podID="9005d037-5f85-4c10-a08d-dd696195e149" containerID="8ef0e0f969bb152b9161df55025e49e9ee96a74b15644c1b61a7d809e5953e78" exitCode=0 Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.705841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pd9m7" event={"ID":"9005d037-5f85-4c10-a08d-dd696195e149","Type":"ContainerDied","Data":"8ef0e0f969bb152b9161df55025e49e9ee96a74b15644c1b61a7d809e5953e78"} Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.706627 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-76ccs"] Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.715656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wqj9j" event={"ID":"3c28bb76-25ef-4f36-ad9c-011fc5c4687d","Type":"ContainerStarted","Data":"f13d09f70bfe580cfa2eeee7d6dfce12a84f4fc5b04bb93403c58d62343f274e"} Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.715706 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wqj9j" event={"ID":"3c28bb76-25ef-4f36-ad9c-011fc5c4687d","Type":"ContainerStarted","Data":"b021f10103d1d1acc2422e3cb98d744588db6171cda52c601c4c4f56b9bfdaef"} Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.722007 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc" event={"ID":"1570f261-65d6-442d-8b5b-237d9497476f","Type":"ContainerStarted","Data":"2486d5924e1e1f226519e9493d475338dba13c427a678905d5da739c9978f318"} Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.722880 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-q5btc" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.770535 4799 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-controller-metrics-wqj9j" podStartSLOduration=1.77051253 podStartE2EDuration="1.77051253s" podCreationTimestamp="2026-01-27 09:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:24:07.753653731 +0000 UTC m=+5914.064757796" watchObservedRunningTime="2026-01-27 09:24:07.77051253 +0000 UTC m=+5914.081616595" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.810593 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q5btc" podStartSLOduration=2.810572301 podStartE2EDuration="2.810572301s" podCreationTimestamp="2026-01-27 09:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:24:07.783600736 +0000 UTC m=+5914.094704801" watchObservedRunningTime="2026-01-27 09:24:07.810572301 +0000 UTC m=+5914.121676366" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.824234 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-operator-scripts\") pod \"octavia-db-create-76ccs\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.824362 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8wb\" (UniqueName: \"kubernetes.io/projected/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-kube-api-access-2s8wb\") pod \"octavia-db-create-76ccs\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.926051 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-operator-scripts\") pod \"octavia-db-create-76ccs\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.926169 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8wb\" (UniqueName: \"kubernetes.io/projected/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-kube-api-access-2s8wb\") pod \"octavia-db-create-76ccs\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.927630 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-operator-scripts\") pod \"octavia-db-create-76ccs\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:07 crc kubenswrapper[4799]: I0127 09:24:07.956026 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8wb\" (UniqueName: \"kubernetes.io/projected/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-kube-api-access-2s8wb\") pod \"octavia-db-create-76ccs\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.018381 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.510740 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-76ccs"] Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.736538 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pd9m7" event={"ID":"9005d037-5f85-4c10-a08d-dd696195e149","Type":"ContainerStarted","Data":"c95b9a13caa1034f6c99132bf73702181793032017d3d2040f55949d6da939d7"} Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.736989 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.737005 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.737017 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pd9m7" event={"ID":"9005d037-5f85-4c10-a08d-dd696195e149","Type":"ContainerStarted","Data":"cc45cc6b3ce1fa8d95b754ba7e8e7ed8b508a8bd2e1d19e62bda98d0a1b0cbf7"} Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.740728 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-76ccs" event={"ID":"cb2d4eca-17ab-4ead-8e06-cd9d2c197577","Type":"ContainerStarted","Data":"b55f486cddfd33c8aa28760ed4b46f0110726e6c0ce9c5bfbfe3025cf7914e1a"} Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.740770 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-76ccs" event={"ID":"cb2d4eca-17ab-4ead-8e06-cd9d2c197577","Type":"ContainerStarted","Data":"bf0a77721c678b8ca9a98a7570b4bdd1952b30dd423e864bf209f8b39525f6ab"} Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.760655 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pd9m7" 
podStartSLOduration=3.76063088 podStartE2EDuration="3.76063088s" podCreationTimestamp="2026-01-27 09:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:24:08.755359756 +0000 UTC m=+5915.066463831" watchObservedRunningTime="2026-01-27 09:24:08.76063088 +0000 UTC m=+5915.071734945" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.922545 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-76ccs" podStartSLOduration=1.9225221559999999 podStartE2EDuration="1.922522156s" podCreationTimestamp="2026-01-27 09:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:24:08.777274663 +0000 UTC m=+5915.088378728" watchObservedRunningTime="2026-01-27 09:24:08.922522156 +0000 UTC m=+5915.233626221" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.935966 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ec2b-account-create-update-l5zz2"] Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.937424 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.940170 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 27 09:24:08 crc kubenswrapper[4799]: I0127 09:24:08.945447 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ec2b-account-create-update-l5zz2"] Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.051142 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f41bc52-606a-49b5-bdc8-161692d0c525-operator-scripts\") pod \"octavia-ec2b-account-create-update-l5zz2\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.051588 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dvw\" (UniqueName: \"kubernetes.io/projected/0f41bc52-606a-49b5-bdc8-161692d0c525-kube-api-access-l9dvw\") pod \"octavia-ec2b-account-create-update-l5zz2\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.153223 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9dvw\" (UniqueName: \"kubernetes.io/projected/0f41bc52-606a-49b5-bdc8-161692d0c525-kube-api-access-l9dvw\") pod \"octavia-ec2b-account-create-update-l5zz2\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.153295 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f41bc52-606a-49b5-bdc8-161692d0c525-operator-scripts\") pod 
\"octavia-ec2b-account-create-update-l5zz2\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.155291 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f41bc52-606a-49b5-bdc8-161692d0c525-operator-scripts\") pod \"octavia-ec2b-account-create-update-l5zz2\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.174875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9dvw\" (UniqueName: \"kubernetes.io/projected/0f41bc52-606a-49b5-bdc8-161692d0c525-kube-api-access-l9dvw\") pod \"octavia-ec2b-account-create-update-l5zz2\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.347337 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.749405 4799 generic.go:334] "Generic (PLEG): container finished" podID="cb2d4eca-17ab-4ead-8e06-cd9d2c197577" containerID="b55f486cddfd33c8aa28760ed4b46f0110726e6c0ce9c5bfbfe3025cf7914e1a" exitCode=0 Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.749521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-76ccs" event={"ID":"cb2d4eca-17ab-4ead-8e06-cd9d2c197577","Type":"ContainerDied","Data":"b55f486cddfd33c8aa28760ed4b46f0110726e6c0ce9c5bfbfe3025cf7914e1a"} Jan 27 09:24:09 crc kubenswrapper[4799]: I0127 09:24:09.833951 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ec2b-account-create-update-l5zz2"] Jan 27 09:24:10 crc kubenswrapper[4799]: I0127 09:24:10.761970 4799 generic.go:334] "Generic (PLEG): container finished" podID="0f41bc52-606a-49b5-bdc8-161692d0c525" containerID="fe155a79035691eb092677750b72cba05ed0a916a56f82840dc382a08c95efad" exitCode=0 Jan 27 09:24:10 crc kubenswrapper[4799]: I0127 09:24:10.762111 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ec2b-account-create-update-l5zz2" event={"ID":"0f41bc52-606a-49b5-bdc8-161692d0c525","Type":"ContainerDied","Data":"fe155a79035691eb092677750b72cba05ed0a916a56f82840dc382a08c95efad"} Jan 27 09:24:10 crc kubenswrapper[4799]: I0127 09:24:10.762442 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ec2b-account-create-update-l5zz2" event={"ID":"0f41bc52-606a-49b5-bdc8-161692d0c525","Type":"ContainerStarted","Data":"2bf22761b457e51f178d77125c260d0428f8503831a0657c699ecbfd9b58c460"} Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.147873 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.297664 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-operator-scripts\") pod \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.297862 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8wb\" (UniqueName: \"kubernetes.io/projected/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-kube-api-access-2s8wb\") pod \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\" (UID: \"cb2d4eca-17ab-4ead-8e06-cd9d2c197577\") " Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.298968 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb2d4eca-17ab-4ead-8e06-cd9d2c197577" (UID: "cb2d4eca-17ab-4ead-8e06-cd9d2c197577"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.307228 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-kube-api-access-2s8wb" (OuterVolumeSpecName: "kube-api-access-2s8wb") pod "cb2d4eca-17ab-4ead-8e06-cd9d2c197577" (UID: "cb2d4eca-17ab-4ead-8e06-cd9d2c197577"). InnerVolumeSpecName "kube-api-access-2s8wb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.400676 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.400719 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8wb\" (UniqueName: \"kubernetes.io/projected/cb2d4eca-17ab-4ead-8e06-cd9d2c197577-kube-api-access-2s8wb\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.777035 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-76ccs" Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.777023 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-76ccs" event={"ID":"cb2d4eca-17ab-4ead-8e06-cd9d2c197577","Type":"ContainerDied","Data":"bf0a77721c678b8ca9a98a7570b4bdd1952b30dd423e864bf209f8b39525f6ab"} Jan 27 09:24:11 crc kubenswrapper[4799]: I0127 09:24:11.777254 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0a77721c678b8ca9a98a7570b4bdd1952b30dd423e864bf209f8b39525f6ab" Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.152476 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.320611 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f41bc52-606a-49b5-bdc8-161692d0c525-operator-scripts\") pod \"0f41bc52-606a-49b5-bdc8-161692d0c525\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.320716 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9dvw\" (UniqueName: \"kubernetes.io/projected/0f41bc52-606a-49b5-bdc8-161692d0c525-kube-api-access-l9dvw\") pod \"0f41bc52-606a-49b5-bdc8-161692d0c525\" (UID: \"0f41bc52-606a-49b5-bdc8-161692d0c525\") " Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.321595 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f41bc52-606a-49b5-bdc8-161692d0c525-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f41bc52-606a-49b5-bdc8-161692d0c525" (UID: "0f41bc52-606a-49b5-bdc8-161692d0c525"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.324891 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f41bc52-606a-49b5-bdc8-161692d0c525-kube-api-access-l9dvw" (OuterVolumeSpecName: "kube-api-access-l9dvw") pod "0f41bc52-606a-49b5-bdc8-161692d0c525" (UID: "0f41bc52-606a-49b5-bdc8-161692d0c525"). InnerVolumeSpecName "kube-api-access-l9dvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.423726 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f41bc52-606a-49b5-bdc8-161692d0c525-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.423774 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9dvw\" (UniqueName: \"kubernetes.io/projected/0f41bc52-606a-49b5-bdc8-161692d0c525-kube-api-access-l9dvw\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.787944 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ec2b-account-create-update-l5zz2" event={"ID":"0f41bc52-606a-49b5-bdc8-161692d0c525","Type":"ContainerDied","Data":"2bf22761b457e51f178d77125c260d0428f8503831a0657c699ecbfd9b58c460"} Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.788400 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf22761b457e51f178d77125c260d0428f8503831a0657c699ecbfd9b58c460" Jan 27 09:24:12 crc kubenswrapper[4799]: I0127 09:24:12.788008 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ec2b-account-create-update-l5zz2" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.736367 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-cx4gb"] Jan 27 09:24:14 crc kubenswrapper[4799]: E0127 09:24:14.737187 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2d4eca-17ab-4ead-8e06-cd9d2c197577" containerName="mariadb-database-create" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.737201 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2d4eca-17ab-4ead-8e06-cd9d2c197577" containerName="mariadb-database-create" Jan 27 09:24:14 crc kubenswrapper[4799]: E0127 09:24:14.737216 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f41bc52-606a-49b5-bdc8-161692d0c525" containerName="mariadb-account-create-update" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.737224 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f41bc52-606a-49b5-bdc8-161692d0c525" containerName="mariadb-account-create-update" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.737420 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2d4eca-17ab-4ead-8e06-cd9d2c197577" containerName="mariadb-database-create" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.737444 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f41bc52-606a-49b5-bdc8-161692d0c525" containerName="mariadb-account-create-update" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.738163 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.744796 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-cx4gb"] Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.890849 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71d3a837-3194-4f39-b5b5-129fd1881f24-operator-scripts\") pod \"octavia-persistence-db-create-cx4gb\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.890908 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m769\" (UniqueName: \"kubernetes.io/projected/71d3a837-3194-4f39-b5b5-129fd1881f24-kube-api-access-8m769\") pod \"octavia-persistence-db-create-cx4gb\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.992653 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71d3a837-3194-4f39-b5b5-129fd1881f24-operator-scripts\") pod \"octavia-persistence-db-create-cx4gb\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.992707 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m769\" (UniqueName: \"kubernetes.io/projected/71d3a837-3194-4f39-b5b5-129fd1881f24-kube-api-access-8m769\") pod \"octavia-persistence-db-create-cx4gb\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:14 crc kubenswrapper[4799]: I0127 09:24:14.993439 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71d3a837-3194-4f39-b5b5-129fd1881f24-operator-scripts\") pod \"octavia-persistence-db-create-cx4gb\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.021027 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m769\" (UniqueName: \"kubernetes.io/projected/71d3a837-3194-4f39-b5b5-129fd1881f24-kube-api-access-8m769\") pod \"octavia-persistence-db-create-cx4gb\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.067015 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.330665 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-383d-account-create-update-928q5"] Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.332644 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.336088 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.340227 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-383d-account-create-update-928q5"] Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.505291 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9w5\" (UniqueName: \"kubernetes.io/projected/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-kube-api-access-cj9w5\") pod \"octavia-383d-account-create-update-928q5\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.505558 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-operator-scripts\") pod \"octavia-383d-account-create-update-928q5\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.556497 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-cx4gb"] Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.607747 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-operator-scripts\") pod \"octavia-383d-account-create-update-928q5\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.607890 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cj9w5\" (UniqueName: \"kubernetes.io/projected/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-kube-api-access-cj9w5\") pod \"octavia-383d-account-create-update-928q5\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.608603 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-operator-scripts\") pod \"octavia-383d-account-create-update-928q5\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.631894 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj9w5\" (UniqueName: \"kubernetes.io/projected/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-kube-api-access-cj9w5\") pod \"octavia-383d-account-create-update-928q5\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.667784 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.829420 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-cx4gb" event={"ID":"71d3a837-3194-4f39-b5b5-129fd1881f24","Type":"ContainerStarted","Data":"4936bfb27642b13f48eb8712017bd7b243ccdce6ef350a596c03138849bcf02f"} Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.829839 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-cx4gb" event={"ID":"71d3a837-3194-4f39-b5b5-129fd1881f24","Type":"ContainerStarted","Data":"b74ae6d38ff1841aeacceba42188dba201c71fbfd1ecce25391981ef8947dd7e"} Jan 27 09:24:15 crc kubenswrapper[4799]: I0127 09:24:15.850408 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-persistence-db-create-cx4gb" podStartSLOduration=1.850379852 podStartE2EDuration="1.850379852s" podCreationTimestamp="2026-01-27 09:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:24:15.847435782 +0000 UTC m=+5922.158539847" watchObservedRunningTime="2026-01-27 09:24:15.850379852 +0000 UTC m=+5922.161483917" Jan 27 09:24:16 crc kubenswrapper[4799]: I0127 09:24:16.124388 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-383d-account-create-update-928q5"] Jan 27 09:24:16 crc kubenswrapper[4799]: W0127 09:24:16.129709 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9bee34_4a93_4e3f_bf8c_ed07be0400f3.slice/crio-97bac02142f0ce3db0f8195fe89df0289b4842189f91db38cdac8f280523025b WatchSource:0}: Error finding container 97bac02142f0ce3db0f8195fe89df0289b4842189f91db38cdac8f280523025b: Status 404 returned error can't find the container with id 
97bac02142f0ce3db0f8195fe89df0289b4842189f91db38cdac8f280523025b Jan 27 09:24:16 crc kubenswrapper[4799]: I0127 09:24:16.846934 4799 generic.go:334] "Generic (PLEG): container finished" podID="71d3a837-3194-4f39-b5b5-129fd1881f24" containerID="4936bfb27642b13f48eb8712017bd7b243ccdce6ef350a596c03138849bcf02f" exitCode=0 Jan 27 09:24:16 crc kubenswrapper[4799]: I0127 09:24:16.847429 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-cx4gb" event={"ID":"71d3a837-3194-4f39-b5b5-129fd1881f24","Type":"ContainerDied","Data":"4936bfb27642b13f48eb8712017bd7b243ccdce6ef350a596c03138849bcf02f"} Jan 27 09:24:16 crc kubenswrapper[4799]: I0127 09:24:16.850272 4799 generic.go:334] "Generic (PLEG): container finished" podID="ad9bee34-4a93-4e3f-bf8c-ed07be0400f3" containerID="c115cb34f1882e9319984e8a024ebbab4685a5002b1752653e73038f1cb08ebf" exitCode=0 Jan 27 09:24:16 crc kubenswrapper[4799]: I0127 09:24:16.850363 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-383d-account-create-update-928q5" event={"ID":"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3","Type":"ContainerDied","Data":"c115cb34f1882e9319984e8a024ebbab4685a5002b1752653e73038f1cb08ebf"} Jan 27 09:24:16 crc kubenswrapper[4799]: I0127 09:24:16.850409 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-383d-account-create-update-928q5" event={"ID":"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3","Type":"ContainerStarted","Data":"97bac02142f0ce3db0f8195fe89df0289b4842189f91db38cdac8f280523025b"} Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.416928 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.426517 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.570662 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71d3a837-3194-4f39-b5b5-129fd1881f24-operator-scripts\") pod \"71d3a837-3194-4f39-b5b5-129fd1881f24\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.570753 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-operator-scripts\") pod \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.570859 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj9w5\" (UniqueName: \"kubernetes.io/projected/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-kube-api-access-cj9w5\") pod \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\" (UID: \"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3\") " Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.570919 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m769\" (UniqueName: \"kubernetes.io/projected/71d3a837-3194-4f39-b5b5-129fd1881f24-kube-api-access-8m769\") pod \"71d3a837-3194-4f39-b5b5-129fd1881f24\" (UID: \"71d3a837-3194-4f39-b5b5-129fd1881f24\") " Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.576183 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71d3a837-3194-4f39-b5b5-129fd1881f24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71d3a837-3194-4f39-b5b5-129fd1881f24" (UID: "71d3a837-3194-4f39-b5b5-129fd1881f24"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.576482 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad9bee34-4a93-4e3f-bf8c-ed07be0400f3" (UID: "ad9bee34-4a93-4e3f-bf8c-ed07be0400f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.598664 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-kube-api-access-cj9w5" (OuterVolumeSpecName: "kube-api-access-cj9w5") pod "ad9bee34-4a93-4e3f-bf8c-ed07be0400f3" (UID: "ad9bee34-4a93-4e3f-bf8c-ed07be0400f3"). InnerVolumeSpecName "kube-api-access-cj9w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.679238 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj9w5\" (UniqueName: \"kubernetes.io/projected/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-kube-api-access-cj9w5\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.679281 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71d3a837-3194-4f39-b5b5-129fd1881f24-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.679322 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.681563 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/71d3a837-3194-4f39-b5b5-129fd1881f24-kube-api-access-8m769" (OuterVolumeSpecName: "kube-api-access-8m769") pod "71d3a837-3194-4f39-b5b5-129fd1881f24" (UID: "71d3a837-3194-4f39-b5b5-129fd1881f24"). InnerVolumeSpecName "kube-api-access-8m769". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.781913 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m769\" (UniqueName: \"kubernetes.io/projected/71d3a837-3194-4f39-b5b5-129fd1881f24-kube-api-access-8m769\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.871012 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-383d-account-create-update-928q5" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.871337 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-383d-account-create-update-928q5" event={"ID":"ad9bee34-4a93-4e3f-bf8c-ed07be0400f3","Type":"ContainerDied","Data":"97bac02142f0ce3db0f8195fe89df0289b4842189f91db38cdac8f280523025b"} Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.871423 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97bac02142f0ce3db0f8195fe89df0289b4842189f91db38cdac8f280523025b" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.873636 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-cx4gb" event={"ID":"71d3a837-3194-4f39-b5b5-129fd1881f24","Type":"ContainerDied","Data":"b74ae6d38ff1841aeacceba42188dba201c71fbfd1ecce25391981ef8947dd7e"} Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.873698 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b74ae6d38ff1841aeacceba42188dba201c71fbfd1ecce25391981ef8947dd7e" Jan 27 09:24:18 crc kubenswrapper[4799]: I0127 09:24:18.873700 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/octavia-persistence-db-create-cx4gb" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.862106 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6c4dfc9d78-dch6f"] Jan 27 09:24:20 crc kubenswrapper[4799]: E0127 09:24:20.863057 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d3a837-3194-4f39-b5b5-129fd1881f24" containerName="mariadb-database-create" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.863076 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d3a837-3194-4f39-b5b5-129fd1881f24" containerName="mariadb-database-create" Jan 27 09:24:20 crc kubenswrapper[4799]: E0127 09:24:20.863117 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9bee34-4a93-4e3f-bf8c-ed07be0400f3" containerName="mariadb-account-create-update" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.863127 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9bee34-4a93-4e3f-bf8c-ed07be0400f3" containerName="mariadb-account-create-update" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.863353 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d3a837-3194-4f39-b5b5-129fd1881f24" containerName="mariadb-database-create" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.863387 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9bee34-4a93-4e3f-bf8c-ed07be0400f3" containerName="mariadb-account-create-update" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.865088 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.870855 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.871440 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-lqx8w" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.871577 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.880430 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6c4dfc9d78-dch6f"] Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.930961 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/044571bd-c726-4af3-8344-7df2aafcca9a-octavia-run\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.931057 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-config-data\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.931086 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-combined-ca-bundle\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:20 crc kubenswrapper[4799]: 
I0127 09:24:20.931123 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-scripts\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:20 crc kubenswrapper[4799]: I0127 09:24:20.931160 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/044571bd-c726-4af3-8344-7df2aafcca9a-config-data-merged\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.033540 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-config-data\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.033602 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-combined-ca-bundle\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.033634 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-scripts\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.033675 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/044571bd-c726-4af3-8344-7df2aafcca9a-config-data-merged\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.033771 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/044571bd-c726-4af3-8344-7df2aafcca9a-octavia-run\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.034228 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/044571bd-c726-4af3-8344-7df2aafcca9a-octavia-run\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.035676 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/044571bd-c726-4af3-8344-7df2aafcca9a-config-data-merged\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.044132 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-scripts\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.044250 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-combined-ca-bundle\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.044531 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044571bd-c726-4af3-8344-7df2aafcca9a-config-data\") pod \"octavia-api-6c4dfc9d78-dch6f\" (UID: \"044571bd-c726-4af3-8344-7df2aafcca9a\") " pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.193215 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.749689 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6c4dfc9d78-dch6f"] Jan 27 09:24:21 crc kubenswrapper[4799]: I0127 09:24:21.907546 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c4dfc9d78-dch6f" event={"ID":"044571bd-c726-4af3-8344-7df2aafcca9a","Type":"ContainerStarted","Data":"60d2c94405a790d3e6e5593da98391dcbd127d1c0489c7b76510df45065b7634"} Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.032467 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-77xvm"] Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.034582 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.066283 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77xvm"] Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.160369 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-utilities\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.160499 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-catalog-content\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.160647 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbt9c\" (UniqueName: \"kubernetes.io/projected/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-kube-api-access-pbt9c\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.262891 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-catalog-content\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.263035 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pbt9c\" (UniqueName: \"kubernetes.io/projected/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-kube-api-access-pbt9c\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.263088 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-utilities\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.263475 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-catalog-content\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.263613 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-utilities\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.290238 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbt9c\" (UniqueName: \"kubernetes.io/projected/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-kube-api-access-pbt9c\") pod \"certified-operators-77xvm\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:22 crc kubenswrapper[4799]: I0127 09:24:22.359685 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:23 crc kubenswrapper[4799]: I0127 09:24:23.215767 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77xvm"] Jan 27 09:24:23 crc kubenswrapper[4799]: I0127 09:24:23.954129 4799 generic.go:334] "Generic (PLEG): container finished" podID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerID="7675b5ba00d56c1d54470098ff196a9c64a64b2110c5e92e64a865bd43f2f5a1" exitCode=0 Jan 27 09:24:23 crc kubenswrapper[4799]: I0127 09:24:23.954283 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77xvm" event={"ID":"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d","Type":"ContainerDied","Data":"7675b5ba00d56c1d54470098ff196a9c64a64b2110c5e92e64a865bd43f2f5a1"} Jan 27 09:24:23 crc kubenswrapper[4799]: I0127 09:24:23.954561 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77xvm" event={"ID":"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d","Type":"ContainerStarted","Data":"c20879a845f289b66d99909d19d424033350dd7ed6489266e08d957a8b0b71c1"} Jan 27 09:24:24 crc kubenswrapper[4799]: I0127 09:24:24.977455 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77xvm" event={"ID":"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d","Type":"ContainerStarted","Data":"59032dd814407e41b78a1542a27b3080fbbb53ec31be0b01d592b52aae27373c"} Jan 27 09:24:25 crc kubenswrapper[4799]: I0127 09:24:25.992556 4799 generic.go:334] "Generic (PLEG): container finished" podID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerID="59032dd814407e41b78a1542a27b3080fbbb53ec31be0b01d592b52aae27373c" exitCode=0 Jan 27 09:24:25 crc kubenswrapper[4799]: I0127 09:24:25.992605 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77xvm" 
event={"ID":"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d","Type":"ContainerDied","Data":"59032dd814407e41b78a1542a27b3080fbbb53ec31be0b01d592b52aae27373c"} Jan 27 09:24:34 crc kubenswrapper[4799]: I0127 09:24:34.101572 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77xvm" event={"ID":"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d","Type":"ContainerStarted","Data":"34e4ee4d102896bafb750a0c277d80fbc84d426b4bcf08a4b0b034035f39eb31"} Jan 27 09:24:34 crc kubenswrapper[4799]: I0127 09:24:34.104229 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c4dfc9d78-dch6f" event={"ID":"044571bd-c726-4af3-8344-7df2aafcca9a","Type":"ContainerStarted","Data":"3fe7be1446cd071adeac43b52d81d1c4a630f47074c8caff57428ec19fd5a45e"} Jan 27 09:24:34 crc kubenswrapper[4799]: I0127 09:24:34.129794 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-77xvm" podStartSLOduration=2.521006194 podStartE2EDuration="12.12976892s" podCreationTimestamp="2026-01-27 09:24:22 +0000 UTC" firstStartedPulling="2026-01-27 09:24:23.957701952 +0000 UTC m=+5930.268806017" lastFinishedPulling="2026-01-27 09:24:33.566464678 +0000 UTC m=+5939.877568743" observedRunningTime="2026-01-27 09:24:34.126605494 +0000 UTC m=+5940.437709569" watchObservedRunningTime="2026-01-27 09:24:34.12976892 +0000 UTC m=+5940.440872975" Jan 27 09:24:34 crc kubenswrapper[4799]: E0127 09:24:34.328886 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod044571bd_c726_4af3_8344_7df2aafcca9a.slice/crio-conmon-3fe7be1446cd071adeac43b52d81d1c4a630f47074c8caff57428ec19fd5a45e.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod044571bd_c726_4af3_8344_7df2aafcca9a.slice/crio-3fe7be1446cd071adeac43b52d81d1c4a630f47074c8caff57428ec19fd5a45e.scope\": RecentStats: unable to find data in memory cache]" Jan 27 09:24:35 crc kubenswrapper[4799]: I0127 09:24:35.117086 4799 generic.go:334] "Generic (PLEG): container finished" podID="044571bd-c726-4af3-8344-7df2aafcca9a" containerID="3fe7be1446cd071adeac43b52d81d1c4a630f47074c8caff57428ec19fd5a45e" exitCode=0 Jan 27 09:24:35 crc kubenswrapper[4799]: I0127 09:24:35.117212 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c4dfc9d78-dch6f" event={"ID":"044571bd-c726-4af3-8344-7df2aafcca9a","Type":"ContainerDied","Data":"3fe7be1446cd071adeac43b52d81d1c4a630f47074c8caff57428ec19fd5a45e"} Jan 27 09:24:36 crc kubenswrapper[4799]: I0127 09:24:36.129543 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c4dfc9d78-dch6f" event={"ID":"044571bd-c726-4af3-8344-7df2aafcca9a","Type":"ContainerStarted","Data":"59f0f124bdc963f12e5b87a22d06b5a8a16b012f7b3ac89bc2da65ae415cc8ea"} Jan 27 09:24:36 crc kubenswrapper[4799]: I0127 09:24:36.130065 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c4dfc9d78-dch6f" event={"ID":"044571bd-c726-4af3-8344-7df2aafcca9a","Type":"ContainerStarted","Data":"f3e883bb3c16bbb8f5d99d9f8417a9d08ba0a37a72c8cd2c4f71c7175d643698"} Jan 27 09:24:36 crc kubenswrapper[4799]: I0127 09:24:36.130657 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:36 crc kubenswrapper[4799]: I0127 09:24:36.130681 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:36 crc kubenswrapper[4799]: I0127 09:24:36.160155 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6c4dfc9d78-dch6f" 
podStartSLOduration=4.235841962 podStartE2EDuration="16.160133233s" podCreationTimestamp="2026-01-27 09:24:20 +0000 UTC" firstStartedPulling="2026-01-27 09:24:21.753481136 +0000 UTC m=+5928.064585201" lastFinishedPulling="2026-01-27 09:24:33.677772407 +0000 UTC m=+5939.988876472" observedRunningTime="2026-01-27 09:24:36.158005856 +0000 UTC m=+5942.469109921" watchObservedRunningTime="2026-01-27 09:24:36.160133233 +0000 UTC m=+5942.471237308" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.594137 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.602798 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pd9m7" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.610823 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q5btc" podUID="1570f261-65d6-442d-8b5b-237d9497476f" containerName="ovn-controller" probeResult="failure" output=< Jan 27 09:24:40 crc kubenswrapper[4799]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 09:24:40 crc kubenswrapper[4799]: > Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.749867 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q5btc-config-pkjqn"] Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.751539 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.754878 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.783997 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q5btc-config-pkjqn"] Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.827979 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.828445 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-additional-scripts\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.828610 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run-ovn\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.828746 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-log-ovn\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: 
\"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.828862 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-scripts\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.829046 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dqfb\" (UniqueName: \"kubernetes.io/projected/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-kube-api-access-2dqfb\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931261 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931351 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-additional-scripts\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931390 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run-ovn\") pod 
\"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931424 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-log-ovn\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931449 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-scripts\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931502 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dqfb\" (UniqueName: \"kubernetes.io/projected/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-kube-api-access-2dqfb\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931745 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.931830 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run-ovn\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: 
\"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.932001 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-log-ovn\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.932716 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-additional-scripts\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.934163 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-scripts\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:40 crc kubenswrapper[4799]: I0127 09:24:40.967396 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dqfb\" (UniqueName: \"kubernetes.io/projected/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-kube-api-access-2dqfb\") pod \"ovn-controller-q5btc-config-pkjqn\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:41 crc kubenswrapper[4799]: I0127 09:24:41.082841 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:41 crc kubenswrapper[4799]: I0127 09:24:41.403190 4799 scope.go:117] "RemoveContainer" containerID="46b0b8c4e4233f7b805ce62a3871409ef973b98d19b0d9dd81b337f123e5dc86" Jan 27 09:24:41 crc kubenswrapper[4799]: I0127 09:24:41.586331 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q5btc-config-pkjqn"] Jan 27 09:24:42 crc kubenswrapper[4799]: I0127 09:24:42.192384 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc-config-pkjqn" event={"ID":"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6","Type":"ContainerStarted","Data":"61da552352cecd64b0a511126ecfcf6ac4f2ca9e83ced390078c938112204a20"} Jan 27 09:24:42 crc kubenswrapper[4799]: I0127 09:24:42.192902 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc-config-pkjqn" event={"ID":"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6","Type":"ContainerStarted","Data":"7019f45b881ed32a510de0bb85cc3cda302c463509c12da71525e6bddb6207ee"} Jan 27 09:24:42 crc kubenswrapper[4799]: I0127 09:24:42.218255 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q5btc-config-pkjqn" podStartSLOduration=2.218235445 podStartE2EDuration="2.218235445s" podCreationTimestamp="2026-01-27 09:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:24:42.211883882 +0000 UTC m=+5948.522987957" watchObservedRunningTime="2026-01-27 09:24:42.218235445 +0000 UTC m=+5948.529339510" Jan 27 09:24:42 crc kubenswrapper[4799]: I0127 09:24:42.360830 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:42 crc kubenswrapper[4799]: I0127 09:24:42.360883 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:43 crc kubenswrapper[4799]: I0127 09:24:43.204955 4799 generic.go:334] "Generic (PLEG): container finished" podID="3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" containerID="61da552352cecd64b0a511126ecfcf6ac4f2ca9e83ced390078c938112204a20" exitCode=0 Jan 27 09:24:43 crc kubenswrapper[4799]: I0127 09:24:43.205006 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc-config-pkjqn" event={"ID":"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6","Type":"ContainerDied","Data":"61da552352cecd64b0a511126ecfcf6ac4f2ca9e83ced390078c938112204a20"} Jan 27 09:24:43 crc kubenswrapper[4799]: I0127 09:24:43.416574 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-77xvm" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="registry-server" probeResult="failure" output=< Jan 27 09:24:43 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 09:24:43 crc kubenswrapper[4799]: > Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.657642 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.811845 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run\") pod \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812034 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run" (OuterVolumeSpecName: "var-run") pod "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" (UID: "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812332 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-additional-scripts\") pod \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812385 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-log-ovn\") pod \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812444 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dqfb\" (UniqueName: \"kubernetes.io/projected/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-kube-api-access-2dqfb\") pod \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812488 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run-ovn\") pod \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812499 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" (UID: "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812555 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-scripts\") pod \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\" (UID: \"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6\") " Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.812646 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" (UID: "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.813086 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.813102 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.813111 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.813323 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" (UID: "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.813786 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-scripts" (OuterVolumeSpecName: "scripts") pod "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" (UID: "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.822942 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-kube-api-access-2dqfb" (OuterVolumeSpecName: "kube-api-access-2dqfb") pod "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" (UID: "3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6"). InnerVolumeSpecName "kube-api-access-2dqfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.915344 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.915393 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dqfb\" (UniqueName: \"kubernetes.io/projected/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-kube-api-access-2dqfb\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:44 crc kubenswrapper[4799]: I0127 09:24:44.915407 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.229729 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc-config-pkjqn" 
event={"ID":"3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6","Type":"ContainerDied","Data":"7019f45b881ed32a510de0bb85cc3cda302c463509c12da71525e6bddb6207ee"} Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.229785 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7019f45b881ed32a510de0bb85cc3cda302c463509c12da71525e6bddb6207ee" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.229790 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q5btc-config-pkjqn" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.324590 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q5btc-config-pkjqn"] Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.341370 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q5btc-config-pkjqn"] Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.435943 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q5btc-config-fbpk7"] Jan 27 09:24:45 crc kubenswrapper[4799]: E0127 09:24:45.438706 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" containerName="ovn-config" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.438740 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" containerName="ovn-config" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.439023 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" containerName="ovn-config" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.439753 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.443231 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.457708 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q5btc-config-fbpk7"] Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.528609 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4gz\" (UniqueName: \"kubernetes.io/projected/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-kube-api-access-2f4gz\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.528675 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run-ovn\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.528773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-additional-scripts\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.528803 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-scripts\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: 
\"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.528836 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.529275 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-log-ovn\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.624207 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-q5btc" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.632823 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-log-ovn\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.632952 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4gz\" (UniqueName: \"kubernetes.io/projected/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-kube-api-access-2f4gz\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.632993 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run-ovn\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.633105 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-additional-scripts\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.633147 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-scripts\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.633184 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.633595 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.633685 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run-ovn\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.633738 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-log-ovn\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.635076 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-additional-scripts\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.636667 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-scripts\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.659055 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4gz\" (UniqueName: \"kubernetes.io/projected/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-kube-api-access-2f4gz\") pod \"ovn-controller-q5btc-config-fbpk7\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:45 crc kubenswrapper[4799]: I0127 09:24:45.759627 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:46 crc kubenswrapper[4799]: I0127 09:24:46.254889 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q5btc-config-fbpk7"] Jan 27 09:24:46 crc kubenswrapper[4799]: I0127 09:24:46.472995 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6" path="/var/lib/kubelet/pods/3e8a76d1-a0d1-4637-b6b3-010a32d4b1f6/volumes" Jan 27 09:24:47 crc kubenswrapper[4799]: I0127 09:24:47.249209 4799 generic.go:334] "Generic (PLEG): container finished" podID="ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" containerID="cad6087e6a0e33226beea40cd5cdaf5ab81489f199ed38b9689639cdf12e923d" exitCode=0 Jan 27 09:24:47 crc kubenswrapper[4799]: I0127 09:24:47.249404 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc-config-fbpk7" event={"ID":"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926","Type":"ContainerDied","Data":"cad6087e6a0e33226beea40cd5cdaf5ab81489f199ed38b9689639cdf12e923d"} Jan 27 09:24:47 crc kubenswrapper[4799]: I0127 09:24:47.250288 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc-config-fbpk7" event={"ID":"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926","Type":"ContainerStarted","Data":"98ee02faefe805e9bc898be0109e950cd04d78a7ed68bdf7d3f276ba4312d88e"} Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.322205 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-qml5p"] Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.324697 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.327096 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.327544 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.327836 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.345696 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-qml5p"] Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.391773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa282712-e1dc-48b8-99ce-d801c095eac0-scripts\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.391882 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/aa282712-e1dc-48b8-99ce-d801c095eac0-hm-ports\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.391941 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/aa282712-e1dc-48b8-99ce-d801c095eac0-config-data-merged\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.391983 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa282712-e1dc-48b8-99ce-d801c095eac0-config-data\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.494834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa282712-e1dc-48b8-99ce-d801c095eac0-config-data\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.495035 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa282712-e1dc-48b8-99ce-d801c095eac0-scripts\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.495111 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/aa282712-e1dc-48b8-99ce-d801c095eac0-hm-ports\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.495159 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/aa282712-e1dc-48b8-99ce-d801c095eac0-config-data-merged\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.495764 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/aa282712-e1dc-48b8-99ce-d801c095eac0-config-data-merged\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.496608 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/aa282712-e1dc-48b8-99ce-d801c095eac0-hm-ports\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.504279 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa282712-e1dc-48b8-99ce-d801c095eac0-scripts\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.558669 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa282712-e1dc-48b8-99ce-d801c095eac0-config-data\") pod \"octavia-rsyslog-qml5p\" (UID: \"aa282712-e1dc-48b8-99ce-d801c095eac0\") " pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.660206 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.818274 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.907486 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-scripts\") pod \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.907604 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-log-ovn\") pod \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.907763 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run\") pod \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.907788 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f4gz\" (UniqueName: \"kubernetes.io/projected/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-kube-api-access-2f4gz\") pod \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.907824 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-additional-scripts\") pod \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.907868 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run-ovn\") pod \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\" (UID: \"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926\") " Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.907969 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run" (OuterVolumeSpecName: "var-run") pod "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" (UID: "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.908120 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" (UID: "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.908160 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" (UID: "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.908447 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.908465 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.908476 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.909233 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" (UID: "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.909368 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-scripts" (OuterVolumeSpecName: "scripts") pod "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" (UID: "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:24:48 crc kubenswrapper[4799]: I0127 09:24:48.930416 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-kube-api-access-2f4gz" (OuterVolumeSpecName: "kube-api-access-2f4gz") pod "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" (UID: "ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926"). InnerVolumeSpecName "kube-api-access-2f4gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.010049 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f4gz\" (UniqueName: \"kubernetes.io/projected/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-kube-api-access-2f4gz\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.010088 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.010097 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.139924 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-64dwt"] Jan 27 09:24:49 crc kubenswrapper[4799]: E0127 09:24:49.140432 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" containerName="ovn-config" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.140448 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" containerName="ovn-config" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.140714 4799 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" containerName="ovn-config" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.141786 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.148266 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.157405 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-64dwt"] Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.221822 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f27057ff-7088-4d3e-b007-b96e0f91bea8-amphora-image\") pod \"octavia-image-upload-59f8cff499-64dwt\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.222345 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f27057ff-7088-4d3e-b007-b96e0f91bea8-httpd-config\") pod \"octavia-image-upload-59f8cff499-64dwt\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.277719 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q5btc-config-fbpk7" event={"ID":"ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926","Type":"ContainerDied","Data":"98ee02faefe805e9bc898be0109e950cd04d78a7ed68bdf7d3f276ba4312d88e"} Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.277777 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ee02faefe805e9bc898be0109e950cd04d78a7ed68bdf7d3f276ba4312d88e" 
Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.277814 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q5btc-config-fbpk7" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.335225 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f27057ff-7088-4d3e-b007-b96e0f91bea8-amphora-image\") pod \"octavia-image-upload-59f8cff499-64dwt\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.335293 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f27057ff-7088-4d3e-b007-b96e0f91bea8-httpd-config\") pod \"octavia-image-upload-59f8cff499-64dwt\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.337267 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f27057ff-7088-4d3e-b007-b96e0f91bea8-amphora-image\") pod \"octavia-image-upload-59f8cff499-64dwt\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.340502 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f27057ff-7088-4d3e-b007-b96e0f91bea8-httpd-config\") pod \"octavia-image-upload-59f8cff499-64dwt\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.346919 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-qml5p"] Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 
09:24:49.406235 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-qml5p"] Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.463235 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.918775 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q5btc-config-fbpk7"] Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.930259 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q5btc-config-fbpk7"] Jan 27 09:24:49 crc kubenswrapper[4799]: I0127 09:24:49.960400 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-64dwt"] Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.302034 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qml5p" event={"ID":"aa282712-e1dc-48b8-99ce-d801c095eac0","Type":"ContainerStarted","Data":"6854c174a375de572dcd39c3dec4e808ada5da7eecd6a17767c02185d73aa562"} Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.304851 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-64dwt" event={"ID":"f27057ff-7088-4d3e-b007-b96e0f91bea8","Type":"ContainerStarted","Data":"cac124d345a559f4d1c6bc75c6db6fdb37397e4ca6ac9bb6df914e112b1a19eb"} Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.495483 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926" path="/var/lib/kubelet/pods/ba8eb0e2-3d0b-429c-8c4a-65ae4e2ec926/volumes" Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.771860 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-gvt2q"] Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.773857 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.778708 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.783792 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-gvt2q"] Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.904443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.904617 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-scripts\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.904697 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data-merged\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:50 crc kubenswrapper[4799]: I0127 09:24:50.904723 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-combined-ca-bundle\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 
09:24:51.010393 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data-merged\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.010466 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-combined-ca-bundle\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.010545 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.010806 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-scripts\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.013820 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data-merged\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.020601 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.026072 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-scripts\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.042036 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-combined-ca-bundle\") pod \"octavia-db-sync-gvt2q\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.117579 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:24:51 crc kubenswrapper[4799]: I0127 09:24:51.741171 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-gvt2q"] Jan 27 09:24:51 crc kubenswrapper[4799]: W0127 09:24:51.769656 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1672179_ba8a_4842_aebd_cf496ff726e4.slice/crio-cbcc12b31b51354a57357804e3562b0d1435eca95e66919c855cc3ccd832d309 WatchSource:0}: Error finding container cbcc12b31b51354a57357804e3562b0d1435eca95e66919c855cc3ccd832d309: Status 404 returned error can't find the container with id cbcc12b31b51354a57357804e3562b0d1435eca95e66919c855cc3ccd832d309 Jan 27 09:24:52 crc kubenswrapper[4799]: I0127 09:24:52.333941 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qml5p" event={"ID":"aa282712-e1dc-48b8-99ce-d801c095eac0","Type":"ContainerStarted","Data":"7e0a6f6abdf9378604b09a9e598a810c0860dfd94de091a87225fd462f9129d5"} Jan 27 09:24:52 crc kubenswrapper[4799]: I0127 09:24:52.338287 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvt2q" event={"ID":"d1672179-ba8a-4842-aebd-cf496ff726e4","Type":"ContainerStarted","Data":"cbcc12b31b51354a57357804e3562b0d1435eca95e66919c855cc3ccd832d309"} Jan 27 09:24:52 crc kubenswrapper[4799]: I0127 09:24:52.427654 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:52 crc kubenswrapper[4799]: I0127 09:24:52.495465 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:53 crc kubenswrapper[4799]: I0127 09:24:53.239627 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77xvm"] Jan 27 09:24:53 crc kubenswrapper[4799]: I0127 09:24:53.358286 4799 
generic.go:334] "Generic (PLEG): container finished" podID="d1672179-ba8a-4842-aebd-cf496ff726e4" containerID="ee2651b2e4df67e32a7b6a299a1baf0dcbdfed4e1ea625868216aa59c8820b92" exitCode=0 Jan 27 09:24:53 crc kubenswrapper[4799]: I0127 09:24:53.358449 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvt2q" event={"ID":"d1672179-ba8a-4842-aebd-cf496ff726e4","Type":"ContainerDied","Data":"ee2651b2e4df67e32a7b6a299a1baf0dcbdfed4e1ea625868216aa59c8820b92"} Jan 27 09:24:53 crc kubenswrapper[4799]: I0127 09:24:53.731270 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:24:53 crc kubenswrapper[4799]: I0127 09:24:53.731370 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:24:54 crc kubenswrapper[4799]: I0127 09:24:54.402120 4799 generic.go:334] "Generic (PLEG): container finished" podID="aa282712-e1dc-48b8-99ce-d801c095eac0" containerID="7e0a6f6abdf9378604b09a9e598a810c0860dfd94de091a87225fd462f9129d5" exitCode=0 Jan 27 09:24:54 crc kubenswrapper[4799]: I0127 09:24:54.402233 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qml5p" event={"ID":"aa282712-e1dc-48b8-99ce-d801c095eac0","Type":"ContainerDied","Data":"7e0a6f6abdf9378604b09a9e598a810c0860dfd94de091a87225fd462f9129d5"} Jan 27 09:24:54 crc kubenswrapper[4799]: I0127 09:24:54.410179 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvt2q" 
event={"ID":"d1672179-ba8a-4842-aebd-cf496ff726e4","Type":"ContainerStarted","Data":"4e005bb135414f50cfa85509a0ab5ff9446852027bec38ce811e4b58559aca9e"} Jan 27 09:24:54 crc kubenswrapper[4799]: I0127 09:24:54.410588 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-77xvm" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="registry-server" containerID="cri-o://34e4ee4d102896bafb750a0c277d80fbc84d426b4bcf08a4b0b034035f39eb31" gracePeriod=2 Jan 27 09:24:54 crc kubenswrapper[4799]: I0127 09:24:54.455980 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-gvt2q" podStartSLOduration=4.455957276 podStartE2EDuration="4.455957276s" podCreationTimestamp="2026-01-27 09:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:24:54.455819713 +0000 UTC m=+5960.766923788" watchObservedRunningTime="2026-01-27 09:24:54.455957276 +0000 UTC m=+5960.767061351" Jan 27 09:24:55 crc kubenswrapper[4799]: I0127 09:24:55.430839 4799 generic.go:334] "Generic (PLEG): container finished" podID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerID="34e4ee4d102896bafb750a0c277d80fbc84d426b4bcf08a4b0b034035f39eb31" exitCode=0 Jan 27 09:24:55 crc kubenswrapper[4799]: I0127 09:24:55.430893 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77xvm" event={"ID":"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d","Type":"ContainerDied","Data":"34e4ee4d102896bafb750a0c277d80fbc84d426b4bcf08a4b0b034035f39eb31"} Jan 27 09:24:55 crc kubenswrapper[4799]: I0127 09:24:55.893382 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.233741 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.361419 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6c4dfc9d78-dch6f" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.368334 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbt9c\" (UniqueName: \"kubernetes.io/projected/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-kube-api-access-pbt9c\") pod \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.368464 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-utilities\") pod \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.368542 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-catalog-content\") pod \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\" (UID: \"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d\") " Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.369479 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-utilities" (OuterVolumeSpecName: "utilities") pod "0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" (UID: "0e3e9bf0-9aee-43e5-8a9d-126e43c3814d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.374956 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-kube-api-access-pbt9c" (OuterVolumeSpecName: "kube-api-access-pbt9c") pod "0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" (UID: "0e3e9bf0-9aee-43e5-8a9d-126e43c3814d"). InnerVolumeSpecName "kube-api-access-pbt9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.453834 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" (UID: "0e3e9bf0-9aee-43e5-8a9d-126e43c3814d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.460644 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77xvm" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.465870 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77xvm" event={"ID":"0e3e9bf0-9aee-43e5-8a9d-126e43c3814d","Type":"ContainerDied","Data":"c20879a845f289b66d99909d19d424033350dd7ed6489266e08d957a8b0b71c1"} Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.465950 4799 scope.go:117] "RemoveContainer" containerID="34e4ee4d102896bafb750a0c277d80fbc84d426b4bcf08a4b0b034035f39eb31" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.470819 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.470855 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.470870 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbt9c\" (UniqueName: \"kubernetes.io/projected/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d-kube-api-access-pbt9c\") on node \"crc\" DevicePath \"\"" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.471102 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qml5p" event={"ID":"aa282712-e1dc-48b8-99ce-d801c095eac0","Type":"ContainerStarted","Data":"51d5f6dc819fd1503d13141d21c9f8f7be541bb1e16e5a5635dcddfad018411d"} Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.473505 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.497865 4799 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/octavia-rsyslog-qml5p" podStartSLOduration=1.781822564 podStartE2EDuration="8.497276448s" podCreationTimestamp="2026-01-27 09:24:48 +0000 UTC" firstStartedPulling="2026-01-27 09:24:49.372574015 +0000 UTC m=+5955.683678080" lastFinishedPulling="2026-01-27 09:24:56.088027899 +0000 UTC m=+5962.399131964" observedRunningTime="2026-01-27 09:24:56.494870282 +0000 UTC m=+5962.805974347" watchObservedRunningTime="2026-01-27 09:24:56.497276448 +0000 UTC m=+5962.808380513" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.516760 4799 scope.go:117] "RemoveContainer" containerID="59032dd814407e41b78a1542a27b3080fbbb53ec31be0b01d592b52aae27373c" Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.521727 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77xvm"] Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.538037 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-77xvm"] Jan 27 09:24:56 crc kubenswrapper[4799]: I0127 09:24:56.592161 4799 scope.go:117] "RemoveContainer" containerID="7675b5ba00d56c1d54470098ff196a9c64a64b2110c5e92e64a865bd43f2f5a1" Jan 27 09:24:57 crc kubenswrapper[4799]: I0127 09:24:57.484605 4799 generic.go:334] "Generic (PLEG): container finished" podID="d1672179-ba8a-4842-aebd-cf496ff726e4" containerID="4e005bb135414f50cfa85509a0ab5ff9446852027bec38ce811e4b58559aca9e" exitCode=0 Jan 27 09:24:57 crc kubenswrapper[4799]: I0127 09:24:57.484671 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvt2q" event={"ID":"d1672179-ba8a-4842-aebd-cf496ff726e4","Type":"ContainerDied","Data":"4e005bb135414f50cfa85509a0ab5ff9446852027bec38ce811e4b58559aca9e"} Jan 27 09:24:58 crc kubenswrapper[4799]: I0127 09:24:58.463590 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" 
path="/var/lib/kubelet/pods/0e3e9bf0-9aee-43e5-8a9d-126e43c3814d/volumes" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.671083 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.832801 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-combined-ca-bundle\") pod \"d1672179-ba8a-4842-aebd-cf496ff726e4\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.832896 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-scripts\") pod \"d1672179-ba8a-4842-aebd-cf496ff726e4\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.832969 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data-merged\") pod \"d1672179-ba8a-4842-aebd-cf496ff726e4\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.833039 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data\") pod \"d1672179-ba8a-4842-aebd-cf496ff726e4\" (UID: \"d1672179-ba8a-4842-aebd-cf496ff726e4\") " Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.839336 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data" (OuterVolumeSpecName: "config-data") pod "d1672179-ba8a-4842-aebd-cf496ff726e4" (UID: "d1672179-ba8a-4842-aebd-cf496ff726e4"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.840236 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-scripts" (OuterVolumeSpecName: "scripts") pod "d1672179-ba8a-4842-aebd-cf496ff726e4" (UID: "d1672179-ba8a-4842-aebd-cf496ff726e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.864373 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1672179-ba8a-4842-aebd-cf496ff726e4" (UID: "d1672179-ba8a-4842-aebd-cf496ff726e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.876834 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "d1672179-ba8a-4842-aebd-cf496ff726e4" (UID: "d1672179-ba8a-4842-aebd-cf496ff726e4"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.935830 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.935879 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.935892 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:02 crc kubenswrapper[4799]: I0127 09:25:02.935906 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1672179-ba8a-4842-aebd-cf496ff726e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:03 crc kubenswrapper[4799]: I0127 09:25:03.564319 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvt2q" event={"ID":"d1672179-ba8a-4842-aebd-cf496ff726e4","Type":"ContainerDied","Data":"cbcc12b31b51354a57357804e3562b0d1435eca95e66919c855cc3ccd832d309"} Jan 27 09:25:03 crc kubenswrapper[4799]: I0127 09:25:03.564718 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbcc12b31b51354a57357804e3562b0d1435eca95e66919c855cc3ccd832d309" Jan 27 09:25:03 crc kubenswrapper[4799]: I0127 09:25:03.564380 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-gvt2q" Jan 27 09:25:03 crc kubenswrapper[4799]: I0127 09:25:03.690247 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-qml5p" Jan 27 09:25:04 crc kubenswrapper[4799]: I0127 09:25:04.597597 4799 generic.go:334] "Generic (PLEG): container finished" podID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerID="25d6ab910b3b7f56c321e2f96aa7cdc8d31c84d408da5f7497aaf4e9ef33fab5" exitCode=0 Jan 27 09:25:04 crc kubenswrapper[4799]: I0127 09:25:04.598061 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-64dwt" event={"ID":"f27057ff-7088-4d3e-b007-b96e0f91bea8","Type":"ContainerDied","Data":"25d6ab910b3b7f56c321e2f96aa7cdc8d31c84d408da5f7497aaf4e9ef33fab5"} Jan 27 09:25:06 crc kubenswrapper[4799]: I0127 09:25:06.616577 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-64dwt" event={"ID":"f27057ff-7088-4d3e-b007-b96e0f91bea8","Type":"ContainerStarted","Data":"9251235c43aa4d9e6bd6a8a57844aa8d9c8fddaed9797f6ab2ed2ef28d1837e5"} Jan 27 09:25:06 crc kubenswrapper[4799]: I0127 09:25:06.636533 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-64dwt" podStartSLOduration=1.434317745 podStartE2EDuration="17.636514105s" podCreationTimestamp="2026-01-27 09:24:49 +0000 UTC" firstStartedPulling="2026-01-27 09:24:49.974504349 +0000 UTC m=+5956.285608414" lastFinishedPulling="2026-01-27 09:25:06.176700709 +0000 UTC m=+5972.487804774" observedRunningTime="2026-01-27 09:25:06.634211562 +0000 UTC m=+5972.945315627" watchObservedRunningTime="2026-01-27 09:25:06.636514105 +0000 UTC m=+5972.947618160" Jan 27 09:25:23 crc kubenswrapper[4799]: I0127 09:25:23.731290 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:25:23 crc kubenswrapper[4799]: I0127 09:25:23.732136 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:25:27 crc kubenswrapper[4799]: I0127 09:25:27.602779 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-64dwt"] Jan 27 09:25:27 crc kubenswrapper[4799]: I0127 09:25:27.603691 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-64dwt" podUID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerName="octavia-amphora-httpd" containerID="cri-o://9251235c43aa4d9e6bd6a8a57844aa8d9c8fddaed9797f6ab2ed2ef28d1837e5" gracePeriod=30 Jan 27 09:25:27 crc kubenswrapper[4799]: I0127 09:25:27.823217 4799 generic.go:334] "Generic (PLEG): container finished" podID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerID="9251235c43aa4d9e6bd6a8a57844aa8d9c8fddaed9797f6ab2ed2ef28d1837e5" exitCode=0 Jan 27 09:25:27 crc kubenswrapper[4799]: I0127 09:25:27.823266 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-64dwt" event={"ID":"f27057ff-7088-4d3e-b007-b96e0f91bea8","Type":"ContainerDied","Data":"9251235c43aa4d9e6bd6a8a57844aa8d9c8fddaed9797f6ab2ed2ef28d1837e5"} Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.106370 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.277544 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f27057ff-7088-4d3e-b007-b96e0f91bea8-amphora-image\") pod \"f27057ff-7088-4d3e-b007-b96e0f91bea8\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.277738 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f27057ff-7088-4d3e-b007-b96e0f91bea8-httpd-config\") pod \"f27057ff-7088-4d3e-b007-b96e0f91bea8\" (UID: \"f27057ff-7088-4d3e-b007-b96e0f91bea8\") " Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.312701 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f27057ff-7088-4d3e-b007-b96e0f91bea8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f27057ff-7088-4d3e-b007-b96e0f91bea8" (UID: "f27057ff-7088-4d3e-b007-b96e0f91bea8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.336634 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27057ff-7088-4d3e-b007-b96e0f91bea8-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "f27057ff-7088-4d3e-b007-b96e0f91bea8" (UID: "f27057ff-7088-4d3e-b007-b96e0f91bea8"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.380546 4799 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f27057ff-7088-4d3e-b007-b96e0f91bea8-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.380770 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f27057ff-7088-4d3e-b007-b96e0f91bea8-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.834497 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-64dwt" event={"ID":"f27057ff-7088-4d3e-b007-b96e0f91bea8","Type":"ContainerDied","Data":"cac124d345a559f4d1c6bc75c6db6fdb37397e4ca6ac9bb6df914e112b1a19eb"} Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.834558 4799 scope.go:117] "RemoveContainer" containerID="9251235c43aa4d9e6bd6a8a57844aa8d9c8fddaed9797f6ab2ed2ef28d1837e5" Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.834718 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-64dwt" Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.864717 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-64dwt"] Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.873180 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-64dwt"] Jan 27 09:25:28 crc kubenswrapper[4799]: I0127 09:25:28.875270 4799 scope.go:117] "RemoveContainer" containerID="25d6ab910b3b7f56c321e2f96aa7cdc8d31c84d408da5f7497aaf4e9ef33fab5" Jan 27 09:25:30 crc kubenswrapper[4799]: I0127 09:25:30.461508 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27057ff-7088-4d3e-b007-b96e0f91bea8" path="/var/lib/kubelet/pods/f27057ff-7088-4d3e-b007-b96e0f91bea8/volumes" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.993288 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-tdxwr"] Jan 27 09:25:32 crc kubenswrapper[4799]: E0127 09:25:32.996039 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="registry-server" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996056 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="registry-server" Jan 27 09:25:32 crc kubenswrapper[4799]: E0127 09:25:32.996070 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1672179-ba8a-4842-aebd-cf496ff726e4" containerName="init" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996076 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1672179-ba8a-4842-aebd-cf496ff726e4" containerName="init" Jan 27 09:25:32 crc kubenswrapper[4799]: E0127 09:25:32.996095 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1672179-ba8a-4842-aebd-cf496ff726e4" 
containerName="octavia-db-sync" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996102 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1672179-ba8a-4842-aebd-cf496ff726e4" containerName="octavia-db-sync" Jan 27 09:25:32 crc kubenswrapper[4799]: E0127 09:25:32.996111 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerName="octavia-amphora-httpd" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996118 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerName="octavia-amphora-httpd" Jan 27 09:25:32 crc kubenswrapper[4799]: E0127 09:25:32.996129 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="extract-content" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996134 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="extract-content" Jan 27 09:25:32 crc kubenswrapper[4799]: E0127 09:25:32.996159 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="extract-utilities" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996166 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="extract-utilities" Jan 27 09:25:32 crc kubenswrapper[4799]: E0127 09:25:32.996176 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerName="init" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996183 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerName="init" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996415 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1672179-ba8a-4842-aebd-cf496ff726e4" 
containerName="octavia-db-sync" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996438 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27057ff-7088-4d3e-b007-b96e0f91bea8" containerName="octavia-amphora-httpd" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.996458 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3e9bf0-9aee-43e5-8a9d-126e43c3814d" containerName="registry-server" Jan 27 09:25:32 crc kubenswrapper[4799]: I0127 09:25:32.997973 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.004416 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.006901 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-tdxwr"] Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.077725 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/93bb1d47-c669-4b0e-a8a3-5b272962a266-amphora-image\") pod \"octavia-image-upload-59f8cff499-tdxwr\" (UID: \"93bb1d47-c669-4b0e-a8a3-5b272962a266\") " pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.078177 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/93bb1d47-c669-4b0e-a8a3-5b272962a266-httpd-config\") pod \"octavia-image-upload-59f8cff499-tdxwr\" (UID: \"93bb1d47-c669-4b0e-a8a3-5b272962a266\") " pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.179988 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: 
\"kubernetes.io/empty-dir/93bb1d47-c669-4b0e-a8a3-5b272962a266-amphora-image\") pod \"octavia-image-upload-59f8cff499-tdxwr\" (UID: \"93bb1d47-c669-4b0e-a8a3-5b272962a266\") " pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.180094 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/93bb1d47-c669-4b0e-a8a3-5b272962a266-httpd-config\") pod \"octavia-image-upload-59f8cff499-tdxwr\" (UID: \"93bb1d47-c669-4b0e-a8a3-5b272962a266\") " pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.180928 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/93bb1d47-c669-4b0e-a8a3-5b272962a266-amphora-image\") pod \"octavia-image-upload-59f8cff499-tdxwr\" (UID: \"93bb1d47-c669-4b0e-a8a3-5b272962a266\") " pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.188032 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/93bb1d47-c669-4b0e-a8a3-5b272962a266-httpd-config\") pod \"octavia-image-upload-59f8cff499-tdxwr\" (UID: \"93bb1d47-c669-4b0e-a8a3-5b272962a266\") " pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.316927 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-tdxwr" Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.816530 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-tdxwr"] Jan 27 09:25:33 crc kubenswrapper[4799]: I0127 09:25:33.881396 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tdxwr" event={"ID":"93bb1d47-c669-4b0e-a8a3-5b272962a266","Type":"ContainerStarted","Data":"add1c78e01f008fe3afeed8e5c4914c798145d2a5af8e28a4e2fc8d4270f9366"} Jan 27 09:25:34 crc kubenswrapper[4799]: I0127 09:25:34.893474 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tdxwr" event={"ID":"93bb1d47-c669-4b0e-a8a3-5b272962a266","Type":"ContainerStarted","Data":"47055c2c93966c8c20fb0c5b6ea0c6176a4d7eaa1e52a0ce21affb9ef257ab7d"} Jan 27 09:25:35 crc kubenswrapper[4799]: I0127 09:25:35.904101 4799 generic.go:334] "Generic (PLEG): container finished" podID="93bb1d47-c669-4b0e-a8a3-5b272962a266" containerID="47055c2c93966c8c20fb0c5b6ea0c6176a4d7eaa1e52a0ce21affb9ef257ab7d" exitCode=0 Jan 27 09:25:35 crc kubenswrapper[4799]: I0127 09:25:35.904156 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tdxwr" event={"ID":"93bb1d47-c669-4b0e-a8a3-5b272962a266","Type":"ContainerDied","Data":"47055c2c93966c8c20fb0c5b6ea0c6176a4d7eaa1e52a0ce21affb9ef257ab7d"} Jan 27 09:25:36 crc kubenswrapper[4799]: I0127 09:25:36.917529 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tdxwr" event={"ID":"93bb1d47-c669-4b0e-a8a3-5b272962a266","Type":"ContainerStarted","Data":"8964c0f28928477cef7e20047b02fab93739a189b2ff7ed5c35ac73ccad7e65f"} Jan 27 09:25:36 crc kubenswrapper[4799]: I0127 09:25:36.950060 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-tdxwr" 
podStartSLOduration=2.955968226 podStartE2EDuration="4.950031331s" podCreationTimestamp="2026-01-27 09:25:32 +0000 UTC" firstStartedPulling="2026-01-27 09:25:33.810717754 +0000 UTC m=+6000.121821819" lastFinishedPulling="2026-01-27 09:25:35.804780859 +0000 UTC m=+6002.115884924" observedRunningTime="2026-01-27 09:25:36.946909846 +0000 UTC m=+6003.258013921" watchObservedRunningTime="2026-01-27 09:25:36.950031331 +0000 UTC m=+6003.261135426" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.493630 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-xglg8"] Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.496581 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.502967 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-xglg8"] Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.534639 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.534965 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.537630 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.638048 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/de098d43-15c6-4e83-8ff9-704c15633680-config-data-merged\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.638277 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-config-data\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.638359 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-amphora-certs\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.638400 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/de098d43-15c6-4e83-8ff9-704c15633680-hm-ports\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.638544 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-combined-ca-bundle\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.639476 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-scripts\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.741941 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-amphora-certs\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.742012 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/de098d43-15c6-4e83-8ff9-704c15633680-hm-ports\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.742061 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-combined-ca-bundle\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.742101 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-scripts\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.742164 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/de098d43-15c6-4e83-8ff9-704c15633680-config-data-merged\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.742278 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-config-data\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.742913 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/de098d43-15c6-4e83-8ff9-704c15633680-config-data-merged\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.743258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/de098d43-15c6-4e83-8ff9-704c15633680-hm-ports\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.752570 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-amphora-certs\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.753075 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-scripts\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.753084 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-combined-ca-bundle\") pod \"octavia-healthmanager-xglg8\" (UID: 
\"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.760390 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de098d43-15c6-4e83-8ff9-704c15633680-config-data\") pod \"octavia-healthmanager-xglg8\" (UID: \"de098d43-15c6-4e83-8ff9-704c15633680\") " pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:52 crc kubenswrapper[4799]: I0127 09:25:52.852075 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.546046 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-xglg8"] Jan 27 09:25:53 crc kubenswrapper[4799]: W0127 09:25:53.550614 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde098d43_15c6_4e83_8ff9_704c15633680.slice/crio-2f3df4dc84f6d81756deab7e4c0d97684e3bec969e24f560a77412b3af50625a WatchSource:0}: Error finding container 2f3df4dc84f6d81756deab7e4c0d97684e3bec969e24f560a77412b3af50625a: Status 404 returned error can't find the container with id 2f3df4dc84f6d81756deab7e4c0d97684e3bec969e24f560a77412b3af50625a Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.730931 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.731461 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.731530 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.732819 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.732886 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" gracePeriod=600 Jan 27 09:25:53 crc kubenswrapper[4799]: E0127 09:25:53.852251 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.924104 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-7dfgl"] Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.925982 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.931416 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.931448 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 27 09:25:53 crc kubenswrapper[4799]: I0127 09:25:53.946619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-7dfgl"] Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.069656 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-config-data-merged\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.070140 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-scripts\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.070231 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-amphora-certs\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.070283 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-combined-ca-bundle\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.070422 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-config-data\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.070467 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-hm-ports\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.073373 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" exitCode=0 Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.073425 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0"} Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.073461 4799 scope.go:117] "RemoveContainer" containerID="6d7b80b862be55b1b06bdfee48de3c7e8807494274cc669fc0412f97be57e1fc" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.073983 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:25:54 crc kubenswrapper[4799]: E0127 
09:25:54.074263 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.076835 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xglg8" event={"ID":"de098d43-15c6-4e83-8ff9-704c15633680","Type":"ContainerStarted","Data":"5f23e089c38ffaac5e66d9ff0c2e7b6a96f1964081c9775711eb28841ac2b614"} Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.076895 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xglg8" event={"ID":"de098d43-15c6-4e83-8ff9-704c15633680","Type":"ContainerStarted","Data":"2f3df4dc84f6d81756deab7e4c0d97684e3bec969e24f560a77412b3af50625a"} Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.174469 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-config-data\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.174586 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-hm-ports\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.174694 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" 
(UniqueName: \"kubernetes.io/empty-dir/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-config-data-merged\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.174721 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-scripts\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.174877 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-amphora-certs\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.175058 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-combined-ca-bundle\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.175106 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-config-data-merged\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.175889 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-hm-ports\") pod 
\"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.188236 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-combined-ca-bundle\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.188596 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-config-data\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.189558 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-scripts\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.199183 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f3006a64-ba6c-45e8-a92d-309dbe2daaf9-amphora-certs\") pod \"octavia-housekeeping-7dfgl\" (UID: \"f3006a64-ba6c-45e8-a92d-309dbe2daaf9\") " pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.280819 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:25:54 crc kubenswrapper[4799]: I0127 09:25:54.831166 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-7dfgl"] Jan 27 09:25:54 crc kubenswrapper[4799]: W0127 09:25:54.832478 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3006a64_ba6c_45e8_a92d_309dbe2daaf9.slice/crio-629888d63d51c7da7e3f46411f9dfacaad8699857bc56ee2b32ca7bd60484ae8 WatchSource:0}: Error finding container 629888d63d51c7da7e3f46411f9dfacaad8699857bc56ee2b32ca7bd60484ae8: Status 404 returned error can't find the container with id 629888d63d51c7da7e3f46411f9dfacaad8699857bc56ee2b32ca7bd60484ae8 Jan 27 09:25:55 crc kubenswrapper[4799]: I0127 09:25:55.100177 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-7dfgl" event={"ID":"f3006a64-ba6c-45e8-a92d-309dbe2daaf9","Type":"ContainerStarted","Data":"629888d63d51c7da7e3f46411f9dfacaad8699857bc56ee2b32ca7bd60484ae8"} Jan 27 09:25:55 crc kubenswrapper[4799]: I0127 09:25:55.959140 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-p4lqq"] Jan 27 09:25:55 crc kubenswrapper[4799]: I0127 09:25:55.961829 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:55 crc kubenswrapper[4799]: I0127 09:25:55.963843 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 27 09:25:55 crc kubenswrapper[4799]: I0127 09:25:55.964795 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 27 09:25:55 crc kubenswrapper[4799]: I0127 09:25:55.972430 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-p4lqq"] Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.018794 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-scripts\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.018900 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8f550278-caf7-42a5-9747-446a53632485-hm-ports\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.018977 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-combined-ca-bundle\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.019008 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-config-data\") pod 
\"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.019119 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8f550278-caf7-42a5-9747-446a53632485-config-data-merged\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.019182 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-amphora-certs\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.114844 4799 generic.go:334] "Generic (PLEG): container finished" podID="de098d43-15c6-4e83-8ff9-704c15633680" containerID="5f23e089c38ffaac5e66d9ff0c2e7b6a96f1964081c9775711eb28841ac2b614" exitCode=0 Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.114900 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xglg8" event={"ID":"de098d43-15c6-4e83-8ff9-704c15633680","Type":"ContainerDied","Data":"5f23e089c38ffaac5e66d9ff0c2e7b6a96f1964081c9775711eb28841ac2b614"} Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.120318 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8f550278-caf7-42a5-9747-446a53632485-config-data-merged\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.120387 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-amphora-certs\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.120447 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-scripts\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.120524 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8f550278-caf7-42a5-9747-446a53632485-hm-ports\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.120616 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-combined-ca-bundle\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.120648 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-config-data\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.121132 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8f550278-caf7-42a5-9747-446a53632485-config-data-merged\") pod \"octavia-worker-p4lqq\" (UID: 
\"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.121957 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8f550278-caf7-42a5-9747-446a53632485-hm-ports\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.126520 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-amphora-certs\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.126573 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-combined-ca-bundle\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.129542 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-config-data\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.142264 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f550278-caf7-42a5-9747-446a53632485-scripts\") pod \"octavia-worker-p4lqq\" (UID: \"8f550278-caf7-42a5-9747-446a53632485\") " pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.287810 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-p4lqq" Jan 27 09:25:56 crc kubenswrapper[4799]: I0127 09:25:56.873139 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-p4lqq"] Jan 27 09:25:57 crc kubenswrapper[4799]: I0127 09:25:57.125499 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-p4lqq" event={"ID":"8f550278-caf7-42a5-9747-446a53632485","Type":"ContainerStarted","Data":"4d46e15a415569a3be4195f934956f246b70ae48035cebc6267ffc7c5e3a94be"} Jan 27 09:25:57 crc kubenswrapper[4799]: I0127 09:25:57.127331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-7dfgl" event={"ID":"f3006a64-ba6c-45e8-a92d-309dbe2daaf9","Type":"ContainerStarted","Data":"697eff59fb4d0a5b10eececd7c16ba1aac8571c58a1957cacf04ec7044e12484"} Jan 27 09:25:57 crc kubenswrapper[4799]: I0127 09:25:57.131913 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xglg8" event={"ID":"de098d43-15c6-4e83-8ff9-704c15633680","Type":"ContainerStarted","Data":"04c9e9adbc3cb2eb68d8ec15e61b7e06f81bd025f92fa090a1d6ead2b7933329"} Jan 27 09:25:57 crc kubenswrapper[4799]: I0127 09:25:57.132549 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:25:57 crc kubenswrapper[4799]: I0127 09:25:57.183745 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-xglg8" podStartSLOduration=5.18372319 podStartE2EDuration="5.18372319s" podCreationTimestamp="2026-01-27 09:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:25:57.181617163 +0000 UTC m=+6023.492721238" watchObservedRunningTime="2026-01-27 09:25:57.18372319 +0000 UTC m=+6023.494827255" Jan 27 09:25:57 crc kubenswrapper[4799]: I0127 09:25:57.720481 4799 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/octavia-healthmanager-xglg8"] Jan 27 09:25:58 crc kubenswrapper[4799]: I0127 09:25:58.148428 4799 generic.go:334] "Generic (PLEG): container finished" podID="f3006a64-ba6c-45e8-a92d-309dbe2daaf9" containerID="697eff59fb4d0a5b10eececd7c16ba1aac8571c58a1957cacf04ec7044e12484" exitCode=0 Jan 27 09:25:58 crc kubenswrapper[4799]: I0127 09:25:58.148508 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-7dfgl" event={"ID":"f3006a64-ba6c-45e8-a92d-309dbe2daaf9","Type":"ContainerDied","Data":"697eff59fb4d0a5b10eececd7c16ba1aac8571c58a1957cacf04ec7044e12484"} Jan 27 09:25:59 crc kubenswrapper[4799]: I0127 09:25:59.163337 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-p4lqq" event={"ID":"8f550278-caf7-42a5-9747-446a53632485","Type":"ContainerStarted","Data":"acf1e1c22cbcd453b7449808488b9af512177e9c30970e1a7d9430879e371ef4"} Jan 27 09:25:59 crc kubenswrapper[4799]: I0127 09:25:59.165932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-7dfgl" event={"ID":"f3006a64-ba6c-45e8-a92d-309dbe2daaf9","Type":"ContainerStarted","Data":"7e8242a8f18d8697724404d8444b80dbea61140f6bc4769e742c654c9213d68e"} Jan 27 09:25:59 crc kubenswrapper[4799]: I0127 09:25:59.206200 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-7dfgl" podStartSLOduration=5.010752952 podStartE2EDuration="6.206169429s" podCreationTimestamp="2026-01-27 09:25:53 +0000 UTC" firstStartedPulling="2026-01-27 09:25:54.835121246 +0000 UTC m=+6021.146225311" lastFinishedPulling="2026-01-27 09:25:56.030537723 +0000 UTC m=+6022.341641788" observedRunningTime="2026-01-27 09:25:59.195773055 +0000 UTC m=+6025.506877140" watchObservedRunningTime="2026-01-27 09:25:59.206169429 +0000 UTC m=+6025.517273494" Jan 27 09:26:00 crc kubenswrapper[4799]: I0127 09:26:00.177527 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="8f550278-caf7-42a5-9747-446a53632485" containerID="acf1e1c22cbcd453b7449808488b9af512177e9c30970e1a7d9430879e371ef4" exitCode=0 Jan 27 09:26:00 crc kubenswrapper[4799]: I0127 09:26:00.177628 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-p4lqq" event={"ID":"8f550278-caf7-42a5-9747-446a53632485","Type":"ContainerDied","Data":"acf1e1c22cbcd453b7449808488b9af512177e9c30970e1a7d9430879e371ef4"} Jan 27 09:26:00 crc kubenswrapper[4799]: I0127 09:26:00.178455 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:26:01 crc kubenswrapper[4799]: I0127 09:26:01.190324 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-p4lqq" event={"ID":"8f550278-caf7-42a5-9747-446a53632485","Type":"ContainerStarted","Data":"214378435a6c6e368b766f278adb9d58119d77a188dbfb717efe96b6ed6ecb2b"} Jan 27 09:26:01 crc kubenswrapper[4799]: I0127 09:26:01.191121 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-p4lqq" Jan 27 09:26:01 crc kubenswrapper[4799]: I0127 09:26:01.219960 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-p4lqq" podStartSLOduration=4.97960287 podStartE2EDuration="6.2199372s" podCreationTimestamp="2026-01-27 09:25:55 +0000 UTC" firstStartedPulling="2026-01-27 09:25:56.884746993 +0000 UTC m=+6023.195851048" lastFinishedPulling="2026-01-27 09:25:58.125081313 +0000 UTC m=+6024.436185378" observedRunningTime="2026-01-27 09:26:01.206841414 +0000 UTC m=+6027.517945469" watchObservedRunningTime="2026-01-27 09:26:01.2199372 +0000 UTC m=+6027.531041275" Jan 27 09:26:06 crc kubenswrapper[4799]: I0127 09:26:06.452611 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:26:06 crc kubenswrapper[4799]: E0127 09:26:06.454001 4799 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:26:07 crc kubenswrapper[4799]: I0127 09:26:07.886860 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-xglg8" Jan 27 09:26:09 crc kubenswrapper[4799]: I0127 09:26:09.315256 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-7dfgl" Jan 27 09:26:11 crc kubenswrapper[4799]: I0127 09:26:11.327816 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-p4lqq" Jan 27 09:26:14 crc kubenswrapper[4799]: I0127 09:26:14.050683 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2843-account-create-update-48mnx"] Jan 27 09:26:14 crc kubenswrapper[4799]: I0127 09:26:14.064896 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ljhr7"] Jan 27 09:26:14 crc kubenswrapper[4799]: I0127 09:26:14.076148 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2843-account-create-update-48mnx"] Jan 27 09:26:14 crc kubenswrapper[4799]: I0127 09:26:14.085028 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ljhr7"] Jan 27 09:26:14 crc kubenswrapper[4799]: I0127 09:26:14.469919 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07984ba6-b448-4418-bc8d-e09313294368" path="/var/lib/kubelet/pods/07984ba6-b448-4418-bc8d-e09313294368/volumes" Jan 27 09:26:14 crc kubenswrapper[4799]: I0127 09:26:14.472730 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="46006d52-e5ef-4ec0-bf9e-e0c77c0cd441" path="/var/lib/kubelet/pods/46006d52-e5ef-4ec0-bf9e-e0c77c0cd441/volumes" Jan 27 09:26:17 crc kubenswrapper[4799]: I0127 09:26:17.451622 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:26:17 crc kubenswrapper[4799]: E0127 09:26:17.452574 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:26:20 crc kubenswrapper[4799]: I0127 09:26:20.038067 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fgxzt"] Jan 27 09:26:20 crc kubenswrapper[4799]: I0127 09:26:20.050131 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fgxzt"] Jan 27 09:26:20 crc kubenswrapper[4799]: I0127 09:26:20.465767 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2" path="/var/lib/kubelet/pods/e9c71ef8-de3c-4b61-ac4d-3bfd8d96a8d2/volumes" Jan 27 09:26:30 crc kubenswrapper[4799]: I0127 09:26:30.452136 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:26:30 crc kubenswrapper[4799]: E0127 09:26:30.453313 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:26:41 crc kubenswrapper[4799]: I0127 09:26:41.452419 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:26:41 crc kubenswrapper[4799]: E0127 09:26:41.453418 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:26:41 crc kubenswrapper[4799]: I0127 09:26:41.551674 4799 scope.go:117] "RemoveContainer" containerID="aa2c906c4a284eea692fb23f88b544ea42b085d555455e752957b466572524cb" Jan 27 09:26:41 crc kubenswrapper[4799]: I0127 09:26:41.592164 4799 scope.go:117] "RemoveContainer" containerID="6677da93dd3166c302eb3c4b5f95f8f8fffff34a9c376da8a6c00206498c184f" Jan 27 09:26:41 crc kubenswrapper[4799]: I0127 09:26:41.635492 4799 scope.go:117] "RemoveContainer" containerID="60766fcd0e55a6dedac127c12ca5f0eb8b00492830d64598c0bd57ab460d9d1f" Jan 27 09:26:48 crc kubenswrapper[4799]: I0127 09:26:48.046715 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-86jnb"] Jan 27 09:26:48 crc kubenswrapper[4799]: I0127 09:26:48.060505 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d0e9-account-create-update-6v2sg"] Jan 27 09:26:48 crc kubenswrapper[4799]: I0127 09:26:48.070444 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d0e9-account-create-update-6v2sg"] Jan 27 09:26:48 crc kubenswrapper[4799]: I0127 09:26:48.077999 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-86jnb"] Jan 27 09:26:48 crc kubenswrapper[4799]: I0127 09:26:48.466677 4799 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b826cd-24f3-452a-a058-aa6dd0414e73" path="/var/lib/kubelet/pods/c0b826cd-24f3-452a-a058-aa6dd0414e73/volumes" Jan 27 09:26:48 crc kubenswrapper[4799]: I0127 09:26:48.469681 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa25c089-ffcf-47cb-a852-f00efd999834" path="/var/lib/kubelet/pods/fa25c089-ffcf-47cb-a852-f00efd999834/volumes" Jan 27 09:26:56 crc kubenswrapper[4799]: I0127 09:26:56.452722 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:26:56 crc kubenswrapper[4799]: E0127 09:26:56.454211 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:26:59 crc kubenswrapper[4799]: I0127 09:26:59.037384 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-xqszt"] Jan 27 09:26:59 crc kubenswrapper[4799]: I0127 09:26:59.048086 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-xqszt"] Jan 27 09:27:00 crc kubenswrapper[4799]: I0127 09:27:00.469594 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff35b6f5-6086-4d84-be04-e985293fdd87" path="/var/lib/kubelet/pods/ff35b6f5-6086-4d84-be04-e985293fdd87/volumes" Jan 27 09:27:07 crc kubenswrapper[4799]: I0127 09:27:07.451809 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:27:07 crc kubenswrapper[4799]: E0127 09:27:07.453028 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:27:22 crc kubenswrapper[4799]: I0127 09:27:22.452357 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:27:22 crc kubenswrapper[4799]: E0127 09:27:22.453431 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:27:35 crc kubenswrapper[4799]: I0127 09:27:35.452596 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:27:35 crc kubenswrapper[4799]: E0127 09:27:35.454091 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:27:41 crc kubenswrapper[4799]: I0127 09:27:41.053488 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-33d6-account-create-update-nmrkh"] Jan 27 09:27:41 crc kubenswrapper[4799]: I0127 09:27:41.061211 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-create-vr2jr"] Jan 27 09:27:41 crc kubenswrapper[4799]: I0127 09:27:41.075426 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-33d6-account-create-update-nmrkh"] Jan 27 09:27:41 crc kubenswrapper[4799]: I0127 09:27:41.083425 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vr2jr"] Jan 27 09:27:41 crc kubenswrapper[4799]: I0127 09:27:41.734058 4799 scope.go:117] "RemoveContainer" containerID="bd9b8a0eb88d29db66f609f0553adfb8c627233e4943e79ced1a508def24aaf3" Jan 27 09:27:41 crc kubenswrapper[4799]: I0127 09:27:41.766217 4799 scope.go:117] "RemoveContainer" containerID="5b455dd7fe8eb8e19cf1247368e12a04f5ba3260c71f621d78bd7fd6edeb7f45" Jan 27 09:27:41 crc kubenswrapper[4799]: I0127 09:27:41.815060 4799 scope.go:117] "RemoveContainer" containerID="561118bd1cfc368adec06eeec6619da8bbf3a3c206a3a17d59935e4034f4949b" Jan 27 09:27:42 crc kubenswrapper[4799]: I0127 09:27:42.464311 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c403195d-3c11-49a2-9f59-31ab7b208057" path="/var/lib/kubelet/pods/c403195d-3c11-49a2-9f59-31ab7b208057/volumes" Jan 27 09:27:42 crc kubenswrapper[4799]: I0127 09:27:42.466142 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0071dd9-0cd6-4403-bfaa-3469fc70f3d8" path="/var/lib/kubelet/pods/f0071dd9-0cd6-4403-bfaa-3469fc70f3d8/volumes" Jan 27 09:27:47 crc kubenswrapper[4799]: I0127 09:27:47.452475 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:27:47 crc kubenswrapper[4799]: E0127 09:27:47.454210 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:27:50 crc kubenswrapper[4799]: I0127 09:27:50.046727 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cqbgh"] Jan 27 09:27:50 crc kubenswrapper[4799]: I0127 09:27:50.057617 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cqbgh"] Jan 27 09:27:50 crc kubenswrapper[4799]: I0127 09:27:50.467972 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba" path="/var/lib/kubelet/pods/7bd5090c-5aee-4ca8-a308-c0f27e2ad4ba/volumes" Jan 27 09:28:00 crc kubenswrapper[4799]: I0127 09:28:00.451735 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:28:00 crc kubenswrapper[4799]: E0127 09:28:00.452572 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:28:11 crc kubenswrapper[4799]: I0127 09:28:11.451283 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:28:11 crc kubenswrapper[4799]: E0127 09:28:11.452241 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.360384 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b5vr4"] Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.368578 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.377373 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5vr4"] Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.533971 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvwk\" (UniqueName: \"kubernetes.io/projected/2ffacd2d-42f8-42e9-acbf-6538d7612091-kube-api-access-mfvwk\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.534073 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-catalog-content\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.534123 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-utilities\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.635779 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvwk\" (UniqueName: 
\"kubernetes.io/projected/2ffacd2d-42f8-42e9-acbf-6538d7612091-kube-api-access-mfvwk\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.635894 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-catalog-content\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.635956 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-utilities\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.636647 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-utilities\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.636969 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-catalog-content\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.656028 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvwk\" (UniqueName: 
\"kubernetes.io/projected/2ffacd2d-42f8-42e9-acbf-6538d7612091-kube-api-access-mfvwk\") pod \"redhat-operators-b5vr4\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:18 crc kubenswrapper[4799]: I0127 09:28:18.689793 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:19 crc kubenswrapper[4799]: I0127 09:28:19.170970 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5vr4"] Jan 27 09:28:19 crc kubenswrapper[4799]: I0127 09:28:19.759245 4799 generic.go:334] "Generic (PLEG): container finished" podID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerID="aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3" exitCode=0 Jan 27 09:28:19 crc kubenswrapper[4799]: I0127 09:28:19.759417 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5vr4" event={"ID":"2ffacd2d-42f8-42e9-acbf-6538d7612091","Type":"ContainerDied","Data":"aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3"} Jan 27 09:28:19 crc kubenswrapper[4799]: I0127 09:28:19.760610 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5vr4" event={"ID":"2ffacd2d-42f8-42e9-acbf-6538d7612091","Type":"ContainerStarted","Data":"ac2fe8bf9fdead6b1c30c985528d93e8fbd2d312a746361dcd1105bb3f31e514"} Jan 27 09:28:19 crc kubenswrapper[4799]: I0127 09:28:19.761679 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:28:20 crc kubenswrapper[4799]: I0127 09:28:20.787261 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5vr4" event={"ID":"2ffacd2d-42f8-42e9-acbf-6538d7612091","Type":"ContainerStarted","Data":"3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804"} Jan 27 09:28:21 crc 
kubenswrapper[4799]: I0127 09:28:21.051694 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-91cb-account-create-update-t4vmh"] Jan 27 09:28:21 crc kubenswrapper[4799]: I0127 09:28:21.065077 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dthfg"] Jan 27 09:28:21 crc kubenswrapper[4799]: I0127 09:28:21.074780 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-dthfg"] Jan 27 09:28:21 crc kubenswrapper[4799]: I0127 09:28:21.085586 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-91cb-account-create-update-t4vmh"] Jan 27 09:28:22 crc kubenswrapper[4799]: I0127 09:28:22.473869 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6dce50-cc86-4160-960d-e175a1044a74" path="/var/lib/kubelet/pods/6d6dce50-cc86-4160-960d-e175a1044a74/volumes" Jan 27 09:28:22 crc kubenswrapper[4799]: I0127 09:28:22.485337 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47ce090-4047-4274-b1f9-6d3b2c467791" path="/var/lib/kubelet/pods/f47ce090-4047-4274-b1f9-6d3b2c467791/volumes" Jan 27 09:28:22 crc kubenswrapper[4799]: I0127 09:28:22.809792 4799 generic.go:334] "Generic (PLEG): container finished" podID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerID="3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804" exitCode=0 Jan 27 09:28:22 crc kubenswrapper[4799]: I0127 09:28:22.809872 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5vr4" event={"ID":"2ffacd2d-42f8-42e9-acbf-6538d7612091","Type":"ContainerDied","Data":"3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804"} Jan 27 09:28:24 crc kubenswrapper[4799]: I0127 09:28:24.459228 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:28:24 crc kubenswrapper[4799]: E0127 09:28:24.460089 4799 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:28:24 crc kubenswrapper[4799]: I0127 09:28:24.833409 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5vr4" event={"ID":"2ffacd2d-42f8-42e9-acbf-6538d7612091","Type":"ContainerStarted","Data":"a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052"} Jan 27 09:28:24 crc kubenswrapper[4799]: I0127 09:28:24.869352 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b5vr4" podStartSLOduration=2.783162823 podStartE2EDuration="6.869326152s" podCreationTimestamp="2026-01-27 09:28:18 +0000 UTC" firstStartedPulling="2026-01-27 09:28:19.761457394 +0000 UTC m=+6166.072561459" lastFinishedPulling="2026-01-27 09:28:23.847620713 +0000 UTC m=+6170.158724788" observedRunningTime="2026-01-27 09:28:24.860283766 +0000 UTC m=+6171.171387861" watchObservedRunningTime="2026-01-27 09:28:24.869326152 +0000 UTC m=+6171.180430237" Jan 27 09:28:28 crc kubenswrapper[4799]: I0127 09:28:28.690864 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:28 crc kubenswrapper[4799]: I0127 09:28:28.692497 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:29 crc kubenswrapper[4799]: I0127 09:28:29.748849 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b5vr4" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="registry-server" probeResult="failure" 
output=< Jan 27 09:28:29 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 09:28:29 crc kubenswrapper[4799]: > Jan 27 09:28:31 crc kubenswrapper[4799]: I0127 09:28:31.035616 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-92vlt"] Jan 27 09:28:31 crc kubenswrapper[4799]: I0127 09:28:31.044246 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-92vlt"] Jan 27 09:28:32 crc kubenswrapper[4799]: I0127 09:28:32.484503 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78b45b9-36fa-4026-8f87-42cb45906e0d" path="/var/lib/kubelet/pods/f78b45b9-36fa-4026-8f87-42cb45906e0d/volumes" Jan 27 09:28:35 crc kubenswrapper[4799]: I0127 09:28:35.451944 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:28:35 crc kubenswrapper[4799]: E0127 09:28:35.453569 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:28:38 crc kubenswrapper[4799]: I0127 09:28:38.740816 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:38 crc kubenswrapper[4799]: I0127 09:28:38.789902 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:38 crc kubenswrapper[4799]: I0127 09:28:38.980026 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5vr4"] Jan 27 09:28:39 crc kubenswrapper[4799]: I0127 09:28:39.971277 
4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b5vr4" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="registry-server" containerID="cri-o://a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052" gracePeriod=2 Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.440245 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.620838 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfvwk\" (UniqueName: \"kubernetes.io/projected/2ffacd2d-42f8-42e9-acbf-6538d7612091-kube-api-access-mfvwk\") pod \"2ffacd2d-42f8-42e9-acbf-6538d7612091\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.621019 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-catalog-content\") pod \"2ffacd2d-42f8-42e9-acbf-6538d7612091\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.621119 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-utilities\") pod \"2ffacd2d-42f8-42e9-acbf-6538d7612091\" (UID: \"2ffacd2d-42f8-42e9-acbf-6538d7612091\") " Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.621904 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-utilities" (OuterVolumeSpecName: "utilities") pod "2ffacd2d-42f8-42e9-acbf-6538d7612091" (UID: "2ffacd2d-42f8-42e9-acbf-6538d7612091"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.626069 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffacd2d-42f8-42e9-acbf-6538d7612091-kube-api-access-mfvwk" (OuterVolumeSpecName: "kube-api-access-mfvwk") pod "2ffacd2d-42f8-42e9-acbf-6538d7612091" (UID: "2ffacd2d-42f8-42e9-acbf-6538d7612091"). InnerVolumeSpecName "kube-api-access-mfvwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.723371 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfvwk\" (UniqueName: \"kubernetes.io/projected/2ffacd2d-42f8-42e9-acbf-6538d7612091-kube-api-access-mfvwk\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.723400 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.725500 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ffacd2d-42f8-42e9-acbf-6538d7612091" (UID: "2ffacd2d-42f8-42e9-acbf-6538d7612091"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:28:40 crc kubenswrapper[4799]: I0127 09:28:40.825427 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffacd2d-42f8-42e9-acbf-6538d7612091-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.001373 4799 generic.go:334] "Generic (PLEG): container finished" podID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerID="a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052" exitCode=0 Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.001421 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5vr4" event={"ID":"2ffacd2d-42f8-42e9-acbf-6538d7612091","Type":"ContainerDied","Data":"a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052"} Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.001458 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5vr4" event={"ID":"2ffacd2d-42f8-42e9-acbf-6538d7612091","Type":"ContainerDied","Data":"ac2fe8bf9fdead6b1c30c985528d93e8fbd2d312a746361dcd1105bb3f31e514"} Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.001478 4799 scope.go:117] "RemoveContainer" containerID="a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.002482 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b5vr4" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.039890 4799 scope.go:117] "RemoveContainer" containerID="3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.047336 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5vr4"] Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.059136 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b5vr4"] Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.080564 4799 scope.go:117] "RemoveContainer" containerID="aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.138833 4799 scope.go:117] "RemoveContainer" containerID="a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052" Jan 27 09:28:41 crc kubenswrapper[4799]: E0127 09:28:41.139434 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052\": container with ID starting with a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052 not found: ID does not exist" containerID="a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.139477 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052"} err="failed to get container status \"a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052\": rpc error: code = NotFound desc = could not find container \"a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052\": container with ID starting with a5019aadd56f8e3d93ae14fe87ac8bdec5c8cf792c63691060b01fa57be1a052 not found: ID does 
not exist" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.139502 4799 scope.go:117] "RemoveContainer" containerID="3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804" Jan 27 09:28:41 crc kubenswrapper[4799]: E0127 09:28:41.140087 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804\": container with ID starting with 3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804 not found: ID does not exist" containerID="3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.140139 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804"} err="failed to get container status \"3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804\": rpc error: code = NotFound desc = could not find container \"3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804\": container with ID starting with 3f9ff120616b5e03831aeedbb5ede2febcc6e759ff0ec46a2e973f5160481804 not found: ID does not exist" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.140174 4799 scope.go:117] "RemoveContainer" containerID="aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3" Jan 27 09:28:41 crc kubenswrapper[4799]: E0127 09:28:41.140703 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3\": container with ID starting with aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3 not found: ID does not exist" containerID="aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.140731 4799 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3"} err="failed to get container status \"aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3\": rpc error: code = NotFound desc = could not find container \"aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3\": container with ID starting with aaa44d6030337db7833cb0cba21b33412e07e1dbff4d9d950e4c612a8ad00cc3 not found: ID does not exist" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.955608 4799 scope.go:117] "RemoveContainer" containerID="a4d62733d08b775cc558b53e2efb5f17698c8e9b63ab6aa88f46f6dc2f5179d1" Jan 27 09:28:41 crc kubenswrapper[4799]: I0127 09:28:41.980556 4799 scope.go:117] "RemoveContainer" containerID="601380be6a575a081a1de72464abe217aac630b6b3a5f0b2e3eb2fdf5a177437" Jan 27 09:28:42 crc kubenswrapper[4799]: I0127 09:28:42.022075 4799 scope.go:117] "RemoveContainer" containerID="84cd476b3734168b1816354fb839e0d21ef1eb27aefd2c60c3bddc6b7782b72f" Jan 27 09:28:42 crc kubenswrapper[4799]: I0127 09:28:42.073582 4799 scope.go:117] "RemoveContainer" containerID="51308f0a6c44f816de55643d5180b5202c7a566b138fea94dccada83476d833c" Jan 27 09:28:42 crc kubenswrapper[4799]: I0127 09:28:42.126402 4799 scope.go:117] "RemoveContainer" containerID="d92728707d8dc5fefcd6e43857c9d7157e13b0096d98e4e881748a1b0f7a66d7" Jan 27 09:28:42 crc kubenswrapper[4799]: I0127 09:28:42.151455 4799 scope.go:117] "RemoveContainer" containerID="760022c8897149c6bc551f19473d3fe27eddab0f2fc20be6d48b15c0eb956001" Jan 27 09:28:42 crc kubenswrapper[4799]: I0127 09:28:42.465192 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" path="/var/lib/kubelet/pods/2ffacd2d-42f8-42e9-acbf-6538d7612091/volumes" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.394421 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2jjlp"] Jan 27 
09:28:43 crc kubenswrapper[4799]: E0127 09:28:43.395265 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="registry-server" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.395277 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="registry-server" Jan 27 09:28:43 crc kubenswrapper[4799]: E0127 09:28:43.395315 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="extract-content" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.395321 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="extract-content" Jan 27 09:28:43 crc kubenswrapper[4799]: E0127 09:28:43.395336 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="extract-utilities" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.395344 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="extract-utilities" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.395523 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffacd2d-42f8-42e9-acbf-6538d7612091" containerName="registry-server" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.397734 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.406987 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2jjlp"] Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.482018 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mt2j\" (UniqueName: \"kubernetes.io/projected/38a3edb5-524f-4cdb-81b1-0f82527784af-kube-api-access-2mt2j\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.482139 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-catalog-content\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.482406 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-utilities\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.585127 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mt2j\" (UniqueName: \"kubernetes.io/projected/38a3edb5-524f-4cdb-81b1-0f82527784af-kube-api-access-2mt2j\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.585206 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-catalog-content\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.585283 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-utilities\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.585833 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-utilities\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.586480 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-catalog-content\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.611763 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mt2j\" (UniqueName: \"kubernetes.io/projected/38a3edb5-524f-4cdb-81b1-0f82527784af-kube-api-access-2mt2j\") pod \"community-operators-2jjlp\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:43 crc kubenswrapper[4799]: I0127 09:28:43.728120 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:44 crc kubenswrapper[4799]: I0127 09:28:44.301170 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2jjlp"] Jan 27 09:28:45 crc kubenswrapper[4799]: I0127 09:28:45.045466 4799 generic.go:334] "Generic (PLEG): container finished" podID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerID="753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07" exitCode=0 Jan 27 09:28:45 crc kubenswrapper[4799]: I0127 09:28:45.045531 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjlp" event={"ID":"38a3edb5-524f-4cdb-81b1-0f82527784af","Type":"ContainerDied","Data":"753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07"} Jan 27 09:28:45 crc kubenswrapper[4799]: I0127 09:28:45.045926 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjlp" event={"ID":"38a3edb5-524f-4cdb-81b1-0f82527784af","Type":"ContainerStarted","Data":"ef76dfc96c5bc158637df324b0e7db8bc3d4dd5c98d2d214ececa28eca0b42ac"} Jan 27 09:28:46 crc kubenswrapper[4799]: I0127 09:28:46.057401 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjlp" event={"ID":"38a3edb5-524f-4cdb-81b1-0f82527784af","Type":"ContainerStarted","Data":"69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314"} Jan 27 09:28:46 crc kubenswrapper[4799]: I0127 09:28:46.451755 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:28:46 crc kubenswrapper[4799]: E0127 09:28:46.452082 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:28:47 crc kubenswrapper[4799]: I0127 09:28:47.069475 4799 generic.go:334] "Generic (PLEG): container finished" podID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerID="69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314" exitCode=0 Jan 27 09:28:47 crc kubenswrapper[4799]: I0127 09:28:47.069573 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjlp" event={"ID":"38a3edb5-524f-4cdb-81b1-0f82527784af","Type":"ContainerDied","Data":"69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314"} Jan 27 09:28:48 crc kubenswrapper[4799]: I0127 09:28:48.080065 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjlp" event={"ID":"38a3edb5-524f-4cdb-81b1-0f82527784af","Type":"ContainerStarted","Data":"d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee"} Jan 27 09:28:48 crc kubenswrapper[4799]: I0127 09:28:48.096006 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2jjlp" podStartSLOduration=2.665743361 podStartE2EDuration="5.095988639s" podCreationTimestamp="2026-01-27 09:28:43 +0000 UTC" firstStartedPulling="2026-01-27 09:28:45.047587575 +0000 UTC m=+6191.358691650" lastFinishedPulling="2026-01-27 09:28:47.477832823 +0000 UTC m=+6193.788936928" observedRunningTime="2026-01-27 09:28:48.094260612 +0000 UTC m=+6194.405364677" watchObservedRunningTime="2026-01-27 09:28:48.095988639 +0000 UTC m=+6194.407092714" Jan 27 09:28:53 crc kubenswrapper[4799]: I0127 09:28:53.728571 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:53 crc kubenswrapper[4799]: I0127 
09:28:53.729356 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:53 crc kubenswrapper[4799]: I0127 09:28:53.778050 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:54 crc kubenswrapper[4799]: I0127 09:28:54.211709 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:54 crc kubenswrapper[4799]: I0127 09:28:54.906291 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2jjlp"] Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.161409 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2jjlp" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="registry-server" containerID="cri-o://d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee" gracePeriod=2 Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.617193 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.779636 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-utilities\") pod \"38a3edb5-524f-4cdb-81b1-0f82527784af\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.779757 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mt2j\" (UniqueName: \"kubernetes.io/projected/38a3edb5-524f-4cdb-81b1-0f82527784af-kube-api-access-2mt2j\") pod \"38a3edb5-524f-4cdb-81b1-0f82527784af\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.779869 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-catalog-content\") pod \"38a3edb5-524f-4cdb-81b1-0f82527784af\" (UID: \"38a3edb5-524f-4cdb-81b1-0f82527784af\") " Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.781126 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-utilities" (OuterVolumeSpecName: "utilities") pod "38a3edb5-524f-4cdb-81b1-0f82527784af" (UID: "38a3edb5-524f-4cdb-81b1-0f82527784af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.788879 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a3edb5-524f-4cdb-81b1-0f82527784af-kube-api-access-2mt2j" (OuterVolumeSpecName: "kube-api-access-2mt2j") pod "38a3edb5-524f-4cdb-81b1-0f82527784af" (UID: "38a3edb5-524f-4cdb-81b1-0f82527784af"). InnerVolumeSpecName "kube-api-access-2mt2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.848654 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38a3edb5-524f-4cdb-81b1-0f82527784af" (UID: "38a3edb5-524f-4cdb-81b1-0f82527784af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.882290 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.882367 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mt2j\" (UniqueName: \"kubernetes.io/projected/38a3edb5-524f-4cdb-81b1-0f82527784af-kube-api-access-2mt2j\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:56 crc kubenswrapper[4799]: I0127 09:28:56.882385 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38a3edb5-524f-4cdb-81b1-0f82527784af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.188088 4799 generic.go:334] "Generic (PLEG): container finished" podID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerID="d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee" exitCode=0 Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.188160 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjlp" event={"ID":"38a3edb5-524f-4cdb-81b1-0f82527784af","Type":"ContainerDied","Data":"d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee"} Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.188195 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-2jjlp" event={"ID":"38a3edb5-524f-4cdb-81b1-0f82527784af","Type":"ContainerDied","Data":"ef76dfc96c5bc158637df324b0e7db8bc3d4dd5c98d2d214ececa28eca0b42ac"} Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.188216 4799 scope.go:117] "RemoveContainer" containerID="d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.188526 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2jjlp" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.228867 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2jjlp"] Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.233230 4799 scope.go:117] "RemoveContainer" containerID="69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.238199 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2jjlp"] Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.252785 4799 scope.go:117] "RemoveContainer" containerID="753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.291895 4799 scope.go:117] "RemoveContainer" containerID="d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee" Jan 27 09:28:57 crc kubenswrapper[4799]: E0127 09:28:57.292366 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee\": container with ID starting with d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee not found: ID does not exist" containerID="d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 
09:28:57.292429 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee"} err="failed to get container status \"d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee\": rpc error: code = NotFound desc = could not find container \"d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee\": container with ID starting with d899aaa956d79a70d50c69b14d1a2f0fedb8628999dfd56d1cf23720ef02a1ee not found: ID does not exist" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.292458 4799 scope.go:117] "RemoveContainer" containerID="69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314" Jan 27 09:28:57 crc kubenswrapper[4799]: E0127 09:28:57.292709 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314\": container with ID starting with 69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314 not found: ID does not exist" containerID="69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.292741 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314"} err="failed to get container status \"69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314\": rpc error: code = NotFound desc = could not find container \"69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314\": container with ID starting with 69e7d849463ac9c954734a23b2341a24a9d90ed03b351949d49aead3f5833314 not found: ID does not exist" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.292764 4799 scope.go:117] "RemoveContainer" containerID="753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07" Jan 27 09:28:57 crc 
kubenswrapper[4799]: E0127 09:28:57.293048 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07\": container with ID starting with 753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07 not found: ID does not exist" containerID="753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.293100 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07"} err="failed to get container status \"753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07\": rpc error: code = NotFound desc = could not find container \"753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07\": container with ID starting with 753ec038a8697350b6cf8cff20ac66389f33fee0cccb7ce18b4469de7eefbb07 not found: ID does not exist" Jan 27 09:28:57 crc kubenswrapper[4799]: I0127 09:28:57.451315 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:28:57 crc kubenswrapper[4799]: E0127 09:28:57.451690 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:28:58 crc kubenswrapper[4799]: I0127 09:28:58.463084 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" path="/var/lib/kubelet/pods/38a3edb5-524f-4cdb-81b1-0f82527784af/volumes" Jan 27 09:29:10 crc 
kubenswrapper[4799]: I0127 09:29:10.452371 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:29:10 crc kubenswrapper[4799]: E0127 09:29:10.453514 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:29:21 crc kubenswrapper[4799]: I0127 09:29:21.451545 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:29:21 crc kubenswrapper[4799]: E0127 09:29:21.453361 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:29:32 crc kubenswrapper[4799]: I0127 09:29:32.040839 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-w6gg8"] Jan 27 09:29:32 crc kubenswrapper[4799]: I0127 09:29:32.051833 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mp6ct"] Jan 27 09:29:32 crc kubenswrapper[4799]: I0127 09:29:32.062674 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mp6ct"] Jan 27 09:29:32 crc kubenswrapper[4799]: I0127 09:29:32.074036 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-w6gg8"] Jan 27 09:29:32 crc 
kubenswrapper[4799]: I0127 09:29:32.451948 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:29:32 crc kubenswrapper[4799]: E0127 09:29:32.452611 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:29:32 crc kubenswrapper[4799]: I0127 09:29:32.462814 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad63280e-42ab-4d15-8f88-f19cd766140f" path="/var/lib/kubelet/pods/ad63280e-42ab-4d15-8f88-f19cd766140f/volumes" Jan 27 09:29:32 crc kubenswrapper[4799]: I0127 09:29:32.463583 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc217648-43cf-48fc-a4e1-e371aacddb31" path="/var/lib/kubelet/pods/dc217648-43cf-48fc-a4e1-e371aacddb31/volumes" Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.030648 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5ffc-account-create-update-xkr67"] Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.039038 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-019f-account-create-update-fjsnb"] Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.048512 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dcrt4"] Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.058768 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5ffc-account-create-update-xkr67"] Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.066695 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-019f-account-create-update-fjsnb"] Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.074612 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4456-account-create-update-tsj7t"] Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.082186 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dcrt4"] Jan 27 09:29:33 crc kubenswrapper[4799]: I0127 09:29:33.090542 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4456-account-create-update-tsj7t"] Jan 27 09:29:34 crc kubenswrapper[4799]: I0127 09:29:34.473636 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cda70d-3c33-441f-bd8f-f838f16d2563" path="/var/lib/kubelet/pods/03cda70d-3c33-441f-bd8f-f838f16d2563/volumes" Jan 27 09:29:34 crc kubenswrapper[4799]: I0127 09:29:34.475165 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c1026c8-75da-4392-99b6-96ccacb81316" path="/var/lib/kubelet/pods/7c1026c8-75da-4392-99b6-96ccacb81316/volumes" Jan 27 09:29:34 crc kubenswrapper[4799]: I0127 09:29:34.476049 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bac970a-8e3f-4265-b625-3af6eeea7cbe" path="/var/lib/kubelet/pods/8bac970a-8e3f-4265-b625-3af6eeea7cbe/volumes" Jan 27 09:29:34 crc kubenswrapper[4799]: I0127 09:29:34.476875 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a70183b9-dfc0-4f3e-838f-81c806acd0fc" path="/var/lib/kubelet/pods/a70183b9-dfc0-4f3e-838f-81c806acd0fc/volumes" Jan 27 09:29:42 crc kubenswrapper[4799]: I0127 09:29:42.316039 4799 scope.go:117] "RemoveContainer" containerID="50394ff2912c3b423eec7eec525fd9dc01fd33a25935b881f697f2bf4d47031c" Jan 27 09:29:42 crc kubenswrapper[4799]: I0127 09:29:42.357022 4799 scope.go:117] "RemoveContainer" containerID="ec23d0cbd1926f410934e781461973f2cebab5b193e91532e49a3ffc146d02b2" Jan 27 09:29:42 crc kubenswrapper[4799]: I0127 09:29:42.420966 
4799 scope.go:117] "RemoveContainer" containerID="cd8052c3f085c82eb28dc088485a61689454ea99bf75dea8da82b9288d6a0640" Jan 27 09:29:42 crc kubenswrapper[4799]: I0127 09:29:42.449228 4799 scope.go:117] "RemoveContainer" containerID="cf7afecaee69098e6d0e1c7274159019d7076388b526e03be5db931ca2817e00" Jan 27 09:29:42 crc kubenswrapper[4799]: I0127 09:29:42.489224 4799 scope.go:117] "RemoveContainer" containerID="b84c0ae62afdb4d4431fd7383acce54610ac98a3792aab706d6b8ce699c099e5" Jan 27 09:29:42 crc kubenswrapper[4799]: I0127 09:29:42.532688 4799 scope.go:117] "RemoveContainer" containerID="34eb76d3005a8570a4ad70c772c712fb53a691575d2ee647ddc66e231c3f2c59" Jan 27 09:29:46 crc kubenswrapper[4799]: I0127 09:29:46.451477 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:29:46 crc kubenswrapper[4799]: E0127 09:29:46.452510 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:29:49 crc kubenswrapper[4799]: I0127 09:29:49.049903 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-csq5g"] Jan 27 09:29:49 crc kubenswrapper[4799]: I0127 09:29:49.062044 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-csq5g"] Jan 27 09:29:50 crc kubenswrapper[4799]: I0127 09:29:50.462659 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe" path="/var/lib/kubelet/pods/edc98ab8-f7f1-4eac-b4bc-483e1e6fefbe/volumes" Jan 27 09:29:58 crc kubenswrapper[4799]: I0127 09:29:58.451737 4799 scope.go:117] 
"RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:29:58 crc kubenswrapper[4799]: E0127 09:29:58.452641 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.147859 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb"] Jan 27 09:30:00 crc kubenswrapper[4799]: E0127 09:30:00.149480 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="extract-content" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.149624 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="extract-content" Jan 27 09:30:00 crc kubenswrapper[4799]: E0127 09:30:00.149727 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="registry-server" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.149813 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="registry-server" Jan 27 09:30:00 crc kubenswrapper[4799]: E0127 09:30:00.149916 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="extract-utilities" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.150005 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="extract-utilities" Jan 27 09:30:00 crc 
kubenswrapper[4799]: I0127 09:30:00.151645 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a3edb5-524f-4cdb-81b1-0f82527784af" containerName="registry-server" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.152750 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.157324 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.157557 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.160895 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb"] Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.192553 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b7b0162-4436-4d26-ba2b-58a750a5b02e-config-volume\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.192770 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6rpj\" (UniqueName: \"kubernetes.io/projected/0b7b0162-4436-4d26-ba2b-58a750a5b02e-kube-api-access-c6rpj\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.193401 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b7b0162-4436-4d26-ba2b-58a750a5b02e-secret-volume\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.295420 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b7b0162-4436-4d26-ba2b-58a750a5b02e-secret-volume\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.295488 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b7b0162-4436-4d26-ba2b-58a750a5b02e-config-volume\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.295531 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6rpj\" (UniqueName: \"kubernetes.io/projected/0b7b0162-4436-4d26-ba2b-58a750a5b02e-kube-api-access-c6rpj\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.296698 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b7b0162-4436-4d26-ba2b-58a750a5b02e-config-volume\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.302194 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b7b0162-4436-4d26-ba2b-58a750a5b02e-secret-volume\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.313531 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6rpj\" (UniqueName: \"kubernetes.io/projected/0b7b0162-4436-4d26-ba2b-58a750a5b02e-kube-api-access-c6rpj\") pod \"collect-profiles-29491770-q9mlb\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.484611 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:00 crc kubenswrapper[4799]: I0127 09:30:00.996362 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb"] Jan 27 09:30:01 crc kubenswrapper[4799]: I0127 09:30:01.049456 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" event={"ID":"0b7b0162-4436-4d26-ba2b-58a750a5b02e","Type":"ContainerStarted","Data":"d9bb879e29876fd9ccce321c1b63b1ec82a49847e9dffc3bf1982960d03296ac"} Jan 27 09:30:02 crc kubenswrapper[4799]: I0127 09:30:02.059423 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b7b0162-4436-4d26-ba2b-58a750a5b02e" containerID="fdc480e47f5ac9c13eafcc30671270a042a08e2e866f3def9f1d225a54ae236f" exitCode=0 Jan 27 09:30:02 crc kubenswrapper[4799]: I0127 09:30:02.059536 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" event={"ID":"0b7b0162-4436-4d26-ba2b-58a750a5b02e","Type":"ContainerDied","Data":"fdc480e47f5ac9c13eafcc30671270a042a08e2e866f3def9f1d225a54ae236f"} Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.461437 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.565603 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6rpj\" (UniqueName: \"kubernetes.io/projected/0b7b0162-4436-4d26-ba2b-58a750a5b02e-kube-api-access-c6rpj\") pod \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.565669 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b7b0162-4436-4d26-ba2b-58a750a5b02e-secret-volume\") pod \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.565919 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b7b0162-4436-4d26-ba2b-58a750a5b02e-config-volume\") pod \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\" (UID: \"0b7b0162-4436-4d26-ba2b-58a750a5b02e\") " Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.567406 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b7b0162-4436-4d26-ba2b-58a750a5b02e-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b7b0162-4436-4d26-ba2b-58a750a5b02e" (UID: "0b7b0162-4436-4d26-ba2b-58a750a5b02e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.572728 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7b0162-4436-4d26-ba2b-58a750a5b02e-kube-api-access-c6rpj" (OuterVolumeSpecName: "kube-api-access-c6rpj") pod "0b7b0162-4436-4d26-ba2b-58a750a5b02e" (UID: "0b7b0162-4436-4d26-ba2b-58a750a5b02e"). 
InnerVolumeSpecName "kube-api-access-c6rpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.573616 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7b0162-4436-4d26-ba2b-58a750a5b02e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b7b0162-4436-4d26-ba2b-58a750a5b02e" (UID: "0b7b0162-4436-4d26-ba2b-58a750a5b02e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.668354 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b7b0162-4436-4d26-ba2b-58a750a5b02e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.668392 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6rpj\" (UniqueName: \"kubernetes.io/projected/0b7b0162-4436-4d26-ba2b-58a750a5b02e-kube-api-access-c6rpj\") on node \"crc\" DevicePath \"\"" Jan 27 09:30:03 crc kubenswrapper[4799]: I0127 09:30:03.668406 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b7b0162-4436-4d26-ba2b-58a750a5b02e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:30:04 crc kubenswrapper[4799]: I0127 09:30:04.078186 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" event={"ID":"0b7b0162-4436-4d26-ba2b-58a750a5b02e","Type":"ContainerDied","Data":"d9bb879e29876fd9ccce321c1b63b1ec82a49847e9dffc3bf1982960d03296ac"} Jan 27 09:30:04 crc kubenswrapper[4799]: I0127 09:30:04.078584 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9bb879e29876fd9ccce321c1b63b1ec82a49847e9dffc3bf1982960d03296ac" Jan 27 09:30:04 crc kubenswrapper[4799]: I0127 09:30:04.078321 4799 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb" Jan 27 09:30:04 crc kubenswrapper[4799]: I0127 09:30:04.528381 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw"] Jan 27 09:30:04 crc kubenswrapper[4799]: I0127 09:30:04.537612 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491725-pwqdw"] Jan 27 09:30:06 crc kubenswrapper[4799]: I0127 09:30:06.049174 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7tn7x"] Jan 27 09:30:06 crc kubenswrapper[4799]: I0127 09:30:06.059693 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7tn7x"] Jan 27 09:30:06 crc kubenswrapper[4799]: I0127 09:30:06.462208 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7" path="/var/lib/kubelet/pods/2cbcbc9c-ee45-4bba-aa82-4c60222a6ef7/volumes" Jan 27 09:30:06 crc kubenswrapper[4799]: I0127 09:30:06.463784 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d678b284-7ca9-4738-a934-e1638038844b" path="/var/lib/kubelet/pods/d678b284-7ca9-4738-a934-e1638038844b/volumes" Jan 27 09:30:08 crc kubenswrapper[4799]: I0127 09:30:08.044191 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9qp8p"] Jan 27 09:30:08 crc kubenswrapper[4799]: I0127 09:30:08.055873 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9qp8p"] Jan 27 09:30:08 crc kubenswrapper[4799]: I0127 09:30:08.464601 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b93d40-5c8a-47d2-8f67-3b22d2594c19" path="/var/lib/kubelet/pods/75b93d40-5c8a-47d2-8f67-3b22d2594c19/volumes" Jan 27 09:30:09 crc kubenswrapper[4799]: I0127 
09:30:09.452128 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:30:09 crc kubenswrapper[4799]: E0127 09:30:09.452777 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:30:23 crc kubenswrapper[4799]: I0127 09:30:23.452172 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:30:23 crc kubenswrapper[4799]: E0127 09:30:23.453350 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:30:27 crc kubenswrapper[4799]: I0127 09:30:27.045882 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-776w9"] Jan 27 09:30:27 crc kubenswrapper[4799]: I0127 09:30:27.062820 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-776w9"] Jan 27 09:30:28 crc kubenswrapper[4799]: I0127 09:30:28.462088 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d72d741-e0a5-4876-b3dd-c773184fc95a" path="/var/lib/kubelet/pods/7d72d741-e0a5-4876-b3dd-c773184fc95a/volumes" Jan 27 09:30:34 crc kubenswrapper[4799]: I0127 09:30:34.457858 4799 scope.go:117] "RemoveContainer" 
containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:30:34 crc kubenswrapper[4799]: E0127 09:30:34.458689 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:30:42 crc kubenswrapper[4799]: I0127 09:30:42.680807 4799 scope.go:117] "RemoveContainer" containerID="133633e18390780601c541c81bdb2581d684ec4f807068997573a5a17b9e46e0" Jan 27 09:30:42 crc kubenswrapper[4799]: I0127 09:30:42.751571 4799 scope.go:117] "RemoveContainer" containerID="18de9effd4f019730fcbfddf286d189168d7b5c8634c4ba495765e686576b063" Jan 27 09:30:42 crc kubenswrapper[4799]: I0127 09:30:42.791422 4799 scope.go:117] "RemoveContainer" containerID="2c43c8034707a68bf10d7af267a80113908fb2eaff30a8ee5c21ea4387943f67" Jan 27 09:30:42 crc kubenswrapper[4799]: I0127 09:30:42.834561 4799 scope.go:117] "RemoveContainer" containerID="59504f5fa21e27601ec9dcf464434715372428419c275147f21d4409f761f926" Jan 27 09:30:42 crc kubenswrapper[4799]: I0127 09:30:42.852968 4799 scope.go:117] "RemoveContainer" containerID="ebe0841af298b53325004f708c663198dbd52cfe14a24a341ee34a8885048a1a" Jan 27 09:30:42 crc kubenswrapper[4799]: I0127 09:30:42.923033 4799 scope.go:117] "RemoveContainer" containerID="61da552352cecd64b0a511126ecfcf6ac4f2ca9e83ced390078c938112204a20" Jan 27 09:30:47 crc kubenswrapper[4799]: I0127 09:30:47.452071 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:30:47 crc kubenswrapper[4799]: E0127 09:30:47.453955 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:31:02 crc kubenswrapper[4799]: I0127 09:31:02.451473 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:31:03 crc kubenswrapper[4799]: I0127 09:31:03.669321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"7984a90d4da0fbe6467f9771a20999ac66281203ac9561f330757681a65007b8"} Jan 27 09:31:13 crc kubenswrapper[4799]: I0127 09:31:13.057662 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hjkv7"] Jan 27 09:31:13 crc kubenswrapper[4799]: I0127 09:31:13.068271 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hjkv7"] Jan 27 09:31:14 crc kubenswrapper[4799]: I0127 09:31:14.029572 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a2e4-account-create-update-svrkq"] Jan 27 09:31:14 crc kubenswrapper[4799]: I0127 09:31:14.040417 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a2e4-account-create-update-svrkq"] Jan 27 09:31:14 crc kubenswrapper[4799]: I0127 09:31:14.465973 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a758b2e8-c50a-4872-9493-7e61d25e8dc4" path="/var/lib/kubelet/pods/a758b2e8-c50a-4872-9493-7e61d25e8dc4/volumes" Jan 27 09:31:14 crc kubenswrapper[4799]: I0127 09:31:14.467566 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebdcba0b-16a3-4b75-a2b1-7d0e3395469e" 
path="/var/lib/kubelet/pods/ebdcba0b-16a3-4b75-a2b1-7d0e3395469e/volumes" Jan 27 09:31:25 crc kubenswrapper[4799]: I0127 09:31:25.067498 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-22xds"] Jan 27 09:31:25 crc kubenswrapper[4799]: I0127 09:31:25.077257 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-22xds"] Jan 27 09:31:26 crc kubenswrapper[4799]: I0127 09:31:26.465621 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b6f7cb-8b32-40b5-a24b-a72cd119c6e8" path="/var/lib/kubelet/pods/39b6f7cb-8b32-40b5-a24b-a72cd119c6e8/volumes" Jan 27 09:31:43 crc kubenswrapper[4799]: I0127 09:31:43.038531 4799 scope.go:117] "RemoveContainer" containerID="cad6087e6a0e33226beea40cd5cdaf5ab81489f199ed38b9689639cdf12e923d" Jan 27 09:31:43 crc kubenswrapper[4799]: I0127 09:31:43.087072 4799 scope.go:117] "RemoveContainer" containerID="9b4332251d36345e8ac23413488e8697cb9aefa75890c3035c3a3b1e2d0b4bbe" Jan 27 09:31:43 crc kubenswrapper[4799]: I0127 09:31:43.172116 4799 scope.go:117] "RemoveContainer" containerID="03230bf99dce8a890f0083d2ddf746009b778ad7f279a82209b3b1ccb7f8c1ca" Jan 27 09:31:43 crc kubenswrapper[4799]: I0127 09:31:43.244429 4799 scope.go:117] "RemoveContainer" containerID="95631f3bbcb886e8c7addc815c6f28be7e54ce8046eacf0068004574071c1743" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.713054 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dld"] Jan 27 09:32:41 crc kubenswrapper[4799]: E0127 09:32:41.713990 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7b0162-4436-4d26-ba2b-58a750a5b02e" containerName="collect-profiles" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.714008 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7b0162-4436-4d26-ba2b-58a750a5b02e" containerName="collect-profiles" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.714223 4799 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7b0162-4436-4d26-ba2b-58a750a5b02e" containerName="collect-profiles" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.715549 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.727221 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dld"] Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.826236 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-utilities\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.826388 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkts\" (UniqueName: \"kubernetes.io/projected/6d17bc51-c5eb-4220-a360-4d32df194105-kube-api-access-wfkts\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.826432 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-catalog-content\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.928554 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfkts\" (UniqueName: 
\"kubernetes.io/projected/6d17bc51-c5eb-4220-a360-4d32df194105-kube-api-access-wfkts\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.928622 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-catalog-content\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.928685 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-utilities\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.929260 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-catalog-content\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.929260 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-utilities\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:41 crc kubenswrapper[4799]: I0127 09:32:41.951074 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfkts\" (UniqueName: 
\"kubernetes.io/projected/6d17bc51-c5eb-4220-a360-4d32df194105-kube-api-access-wfkts\") pod \"redhat-marketplace-c2dld\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:42 crc kubenswrapper[4799]: I0127 09:32:42.049809 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:42 crc kubenswrapper[4799]: I0127 09:32:42.539062 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dld"] Jan 27 09:32:43 crc kubenswrapper[4799]: I0127 09:32:43.517125 4799 generic.go:334] "Generic (PLEG): container finished" podID="6d17bc51-c5eb-4220-a360-4d32df194105" containerID="c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c" exitCode=0 Jan 27 09:32:43 crc kubenswrapper[4799]: I0127 09:32:43.517192 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dld" event={"ID":"6d17bc51-c5eb-4220-a360-4d32df194105","Type":"ContainerDied","Data":"c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c"} Jan 27 09:32:43 crc kubenswrapper[4799]: I0127 09:32:43.517534 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dld" event={"ID":"6d17bc51-c5eb-4220-a360-4d32df194105","Type":"ContainerStarted","Data":"081c1557b56122a38d49dd39e427f9292fd003966da57b1e4da9dea8b4e29ebe"} Jan 27 09:32:44 crc kubenswrapper[4799]: I0127 09:32:44.532817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dld" event={"ID":"6d17bc51-c5eb-4220-a360-4d32df194105","Type":"ContainerStarted","Data":"41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1"} Jan 27 09:32:45 crc kubenswrapper[4799]: I0127 09:32:45.551119 4799 generic.go:334] "Generic (PLEG): container finished" podID="6d17bc51-c5eb-4220-a360-4d32df194105" 
containerID="41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1" exitCode=0 Jan 27 09:32:45 crc kubenswrapper[4799]: I0127 09:32:45.551190 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dld" event={"ID":"6d17bc51-c5eb-4220-a360-4d32df194105","Type":"ContainerDied","Data":"41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1"} Jan 27 09:32:46 crc kubenswrapper[4799]: I0127 09:32:46.562172 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dld" event={"ID":"6d17bc51-c5eb-4220-a360-4d32df194105","Type":"ContainerStarted","Data":"f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9"} Jan 27 09:32:46 crc kubenswrapper[4799]: I0127 09:32:46.579997 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2dld" podStartSLOduration=2.924719163 podStartE2EDuration="5.579981256s" podCreationTimestamp="2026-01-27 09:32:41 +0000 UTC" firstStartedPulling="2026-01-27 09:32:43.518975627 +0000 UTC m=+6429.830079692" lastFinishedPulling="2026-01-27 09:32:46.17423769 +0000 UTC m=+6432.485341785" observedRunningTime="2026-01-27 09:32:46.576856142 +0000 UTC m=+6432.887960227" watchObservedRunningTime="2026-01-27 09:32:46.579981256 +0000 UTC m=+6432.891085321" Jan 27 09:32:52 crc kubenswrapper[4799]: I0127 09:32:52.050381 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:52 crc kubenswrapper[4799]: I0127 09:32:52.052492 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:52 crc kubenswrapper[4799]: I0127 09:32:52.098588 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:52 crc kubenswrapper[4799]: I0127 09:32:52.657553 
4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:52 crc kubenswrapper[4799]: I0127 09:32:52.713740 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dld"] Jan 27 09:32:54 crc kubenswrapper[4799]: I0127 09:32:54.635983 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2dld" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="registry-server" containerID="cri-o://f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9" gracePeriod=2 Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.151676 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.269057 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-catalog-content\") pod \"6d17bc51-c5eb-4220-a360-4d32df194105\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.269258 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfkts\" (UniqueName: \"kubernetes.io/projected/6d17bc51-c5eb-4220-a360-4d32df194105-kube-api-access-wfkts\") pod \"6d17bc51-c5eb-4220-a360-4d32df194105\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.269393 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-utilities\") pod \"6d17bc51-c5eb-4220-a360-4d32df194105\" (UID: \"6d17bc51-c5eb-4220-a360-4d32df194105\") " Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.270421 
4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-utilities" (OuterVolumeSpecName: "utilities") pod "6d17bc51-c5eb-4220-a360-4d32df194105" (UID: "6d17bc51-c5eb-4220-a360-4d32df194105"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.276943 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d17bc51-c5eb-4220-a360-4d32df194105-kube-api-access-wfkts" (OuterVolumeSpecName: "kube-api-access-wfkts") pod "6d17bc51-c5eb-4220-a360-4d32df194105" (UID: "6d17bc51-c5eb-4220-a360-4d32df194105"). InnerVolumeSpecName "kube-api-access-wfkts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.292287 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d17bc51-c5eb-4220-a360-4d32df194105" (UID: "6d17bc51-c5eb-4220-a360-4d32df194105"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.371409 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.371442 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17bc51-c5eb-4220-a360-4d32df194105-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.371460 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfkts\" (UniqueName: \"kubernetes.io/projected/6d17bc51-c5eb-4220-a360-4d32df194105-kube-api-access-wfkts\") on node \"crc\" DevicePath \"\"" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.645127 4799 generic.go:334] "Generic (PLEG): container finished" podID="6d17bc51-c5eb-4220-a360-4d32df194105" containerID="f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9" exitCode=0 Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.645163 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dld" event={"ID":"6d17bc51-c5eb-4220-a360-4d32df194105","Type":"ContainerDied","Data":"f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9"} Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.645195 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2dld" event={"ID":"6d17bc51-c5eb-4220-a360-4d32df194105","Type":"ContainerDied","Data":"081c1557b56122a38d49dd39e427f9292fd003966da57b1e4da9dea8b4e29ebe"} Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.645215 4799 scope.go:117] "RemoveContainer" containerID="f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 
09:32:55.645215 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2dld" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.674057 4799 scope.go:117] "RemoveContainer" containerID="41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.688566 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dld"] Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.704685 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2dld"] Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.715550 4799 scope.go:117] "RemoveContainer" containerID="c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.751004 4799 scope.go:117] "RemoveContainer" containerID="f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9" Jan 27 09:32:55 crc kubenswrapper[4799]: E0127 09:32:55.751566 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9\": container with ID starting with f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9 not found: ID does not exist" containerID="f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.751614 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9"} err="failed to get container status \"f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9\": rpc error: code = NotFound desc = could not find container \"f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9\": container with ID starting with 
f58270fa5c10e22dc415838319f367c17f3f387603ddf2b0301fbc13ac0b7cc9 not found: ID does not exist" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.751642 4799 scope.go:117] "RemoveContainer" containerID="41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1" Jan 27 09:32:55 crc kubenswrapper[4799]: E0127 09:32:55.751899 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1\": container with ID starting with 41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1 not found: ID does not exist" containerID="41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.751922 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1"} err="failed to get container status \"41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1\": rpc error: code = NotFound desc = could not find container \"41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1\": container with ID starting with 41a39e3adf5353c509de55d79464bbd4faa7b7034a7b52e00d4d1d17504e5af1 not found: ID does not exist" Jan 27 09:32:55 crc kubenswrapper[4799]: I0127 09:32:55.751934 4799 scope.go:117] "RemoveContainer" containerID="c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c" Jan 27 09:32:55 crc kubenswrapper[4799]: E0127 09:32:55.752190 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c\": container with ID starting with c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c not found: ID does not exist" containerID="c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c" Jan 27 09:32:55 crc 
kubenswrapper[4799]: I0127 09:32:55.752210 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c"} err="failed to get container status \"c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c\": rpc error: code = NotFound desc = could not find container \"c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c\": container with ID starting with c0378aa42f0619fc30f134453c875fb68165379c14f50e59b5ac0f5e8688b02c not found: ID does not exist" Jan 27 09:32:56 crc kubenswrapper[4799]: I0127 09:32:56.474051 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" path="/var/lib/kubelet/pods/6d17bc51-c5eb-4220-a360-4d32df194105/volumes" Jan 27 09:33:23 crc kubenswrapper[4799]: I0127 09:33:23.731253 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:33:23 crc kubenswrapper[4799]: I0127 09:33:23.731816 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:33:53 crc kubenswrapper[4799]: I0127 09:33:53.731663 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:33:53 crc kubenswrapper[4799]: I0127 09:33:53.732263 4799 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:34:12 crc kubenswrapper[4799]: I0127 09:34:12.069885 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-76ccs"] Jan 27 09:34:12 crc kubenswrapper[4799]: I0127 09:34:12.084783 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-76ccs"] Jan 27 09:34:12 crc kubenswrapper[4799]: I0127 09:34:12.475473 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2d4eca-17ab-4ead-8e06-cd9d2c197577" path="/var/lib/kubelet/pods/cb2d4eca-17ab-4ead-8e06-cd9d2c197577/volumes" Jan 27 09:34:13 crc kubenswrapper[4799]: I0127 09:34:13.044501 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ec2b-account-create-update-l5zz2"] Jan 27 09:34:13 crc kubenswrapper[4799]: I0127 09:34:13.062081 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ec2b-account-create-update-l5zz2"] Jan 27 09:34:14 crc kubenswrapper[4799]: I0127 09:34:14.466942 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f41bc52-606a-49b5-bdc8-161692d0c525" path="/var/lib/kubelet/pods/0f41bc52-606a-49b5-bdc8-161692d0c525/volumes" Jan 27 09:34:18 crc kubenswrapper[4799]: I0127 09:34:18.035350 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-cx4gb"] Jan 27 09:34:18 crc kubenswrapper[4799]: I0127 09:34:18.046961 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-cx4gb"] Jan 27 09:34:18 crc kubenswrapper[4799]: I0127 09:34:18.466018 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="71d3a837-3194-4f39-b5b5-129fd1881f24" path="/var/lib/kubelet/pods/71d3a837-3194-4f39-b5b5-129fd1881f24/volumes" Jan 27 09:34:19 crc kubenswrapper[4799]: I0127 09:34:19.035927 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-383d-account-create-update-928q5"] Jan 27 09:34:19 crc kubenswrapper[4799]: I0127 09:34:19.049218 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-383d-account-create-update-928q5"] Jan 27 09:34:20 crc kubenswrapper[4799]: I0127 09:34:20.463353 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad9bee34-4a93-4e3f-bf8c-ed07be0400f3" path="/var/lib/kubelet/pods/ad9bee34-4a93-4e3f-bf8c-ed07be0400f3/volumes" Jan 27 09:34:23 crc kubenswrapper[4799]: I0127 09:34:23.732052 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:34:23 crc kubenswrapper[4799]: I0127 09:34:23.734381 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:34:23 crc kubenswrapper[4799]: I0127 09:34:23.734519 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:34:23 crc kubenswrapper[4799]: I0127 09:34:23.735431 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7984a90d4da0fbe6467f9771a20999ac66281203ac9561f330757681a65007b8"} 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:34:23 crc kubenswrapper[4799]: I0127 09:34:23.735652 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://7984a90d4da0fbe6467f9771a20999ac66281203ac9561f330757681a65007b8" gracePeriod=600 Jan 27 09:34:24 crc kubenswrapper[4799]: I0127 09:34:24.599688 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="7984a90d4da0fbe6467f9771a20999ac66281203ac9561f330757681a65007b8" exitCode=0 Jan 27 09:34:24 crc kubenswrapper[4799]: I0127 09:34:24.599881 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"7984a90d4da0fbe6467f9771a20999ac66281203ac9561f330757681a65007b8"} Jan 27 09:34:24 crc kubenswrapper[4799]: I0127 09:34:24.600164 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5"} Jan 27 09:34:24 crc kubenswrapper[4799]: I0127 09:34:24.600190 4799 scope.go:117] "RemoveContainer" containerID="34809bc56b11d41a058363fbb0c10c6c566ea89b182b8289519ec32ee41084e0" Jan 27 09:34:43 crc kubenswrapper[4799]: I0127 09:34:43.430817 4799 scope.go:117] "RemoveContainer" containerID="c115cb34f1882e9319984e8a024ebbab4685a5002b1752653e73038f1cb08ebf" Jan 27 09:34:43 crc kubenswrapper[4799]: I0127 09:34:43.457418 4799 scope.go:117] "RemoveContainer" 
containerID="b55f486cddfd33c8aa28760ed4b46f0110726e6c0ce9c5bfbfe3025cf7914e1a" Jan 27 09:34:43 crc kubenswrapper[4799]: I0127 09:34:43.510712 4799 scope.go:117] "RemoveContainer" containerID="4936bfb27642b13f48eb8712017bd7b243ccdce6ef350a596c03138849bcf02f" Jan 27 09:34:43 crc kubenswrapper[4799]: I0127 09:34:43.551966 4799 scope.go:117] "RemoveContainer" containerID="fe155a79035691eb092677750b72cba05ed0a916a56f82840dc382a08c95efad" Jan 27 09:35:03 crc kubenswrapper[4799]: I0127 09:35:03.059833 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-gvt2q"] Jan 27 09:35:03 crc kubenswrapper[4799]: I0127 09:35:03.071641 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-gvt2q"] Jan 27 09:35:04 crc kubenswrapper[4799]: I0127 09:35:04.483716 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1672179-ba8a-4842-aebd-cf496ff726e4" path="/var/lib/kubelet/pods/d1672179-ba8a-4842-aebd-cf496ff726e4/volumes" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.696761 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p2b8p"] Jan 27 09:35:38 crc kubenswrapper[4799]: E0127 09:35:38.697777 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="registry-server" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.697798 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="registry-server" Jan 27 09:35:38 crc kubenswrapper[4799]: E0127 09:35:38.697837 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="extract-content" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.697846 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="extract-content" Jan 27 09:35:38 crc 
kubenswrapper[4799]: E0127 09:35:38.697856 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="extract-utilities" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.697864 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="extract-utilities" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.698361 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d17bc51-c5eb-4220-a360-4d32df194105" containerName="registry-server" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.700483 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.764293 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2b8p"] Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.852846 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-utilities\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.853605 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-catalog-content\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.853729 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pq2p\" (UniqueName: 
\"kubernetes.io/projected/183b7fff-8db9-428f-942b-dc58bd36dfcf-kube-api-access-6pq2p\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.956675 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-utilities\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.956805 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-catalog-content\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.956834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pq2p\" (UniqueName: \"kubernetes.io/projected/183b7fff-8db9-428f-942b-dc58bd36dfcf-kube-api-access-6pq2p\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.957364 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-utilities\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.957423 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-catalog-content\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:38 crc kubenswrapper[4799]: I0127 09:35:38.976970 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pq2p\" (UniqueName: \"kubernetes.io/projected/183b7fff-8db9-428f-942b-dc58bd36dfcf-kube-api-access-6pq2p\") pod \"certified-operators-p2b8p\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:39 crc kubenswrapper[4799]: I0127 09:35:39.051040 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:39 crc kubenswrapper[4799]: I0127 09:35:39.415441 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2b8p"] Jan 27 09:35:40 crc kubenswrapper[4799]: I0127 09:35:40.408203 4799 generic.go:334] "Generic (PLEG): container finished" podID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerID="7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10" exitCode=0 Jan 27 09:35:40 crc kubenswrapper[4799]: I0127 09:35:40.408268 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2b8p" event={"ID":"183b7fff-8db9-428f-942b-dc58bd36dfcf","Type":"ContainerDied","Data":"7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10"} Jan 27 09:35:40 crc kubenswrapper[4799]: I0127 09:35:40.408560 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2b8p" event={"ID":"183b7fff-8db9-428f-942b-dc58bd36dfcf","Type":"ContainerStarted","Data":"6b545ee7a6e5585e4c7fe3c4b09c45fed9efb91b12b14f6befe76e44b9d11fbe"} Jan 27 09:35:40 crc kubenswrapper[4799]: I0127 09:35:40.410775 4799 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:35:41 crc kubenswrapper[4799]: I0127 09:35:41.436735 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2b8p" event={"ID":"183b7fff-8db9-428f-942b-dc58bd36dfcf","Type":"ContainerStarted","Data":"9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301"} Jan 27 09:35:42 crc kubenswrapper[4799]: I0127 09:35:42.536584 4799 generic.go:334] "Generic (PLEG): container finished" podID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerID="9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301" exitCode=0 Jan 27 09:35:42 crc kubenswrapper[4799]: I0127 09:35:42.536649 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2b8p" event={"ID":"183b7fff-8db9-428f-942b-dc58bd36dfcf","Type":"ContainerDied","Data":"9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301"} Jan 27 09:35:43 crc kubenswrapper[4799]: I0127 09:35:43.679553 4799 scope.go:117] "RemoveContainer" containerID="ee2651b2e4df67e32a7b6a299a1baf0dcbdfed4e1ea625868216aa59c8820b92" Jan 27 09:35:43 crc kubenswrapper[4799]: I0127 09:35:43.707498 4799 scope.go:117] "RemoveContainer" containerID="4e005bb135414f50cfa85509a0ab5ff9446852027bec38ce811e4b58559aca9e" Jan 27 09:35:44 crc kubenswrapper[4799]: I0127 09:35:44.561612 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2b8p" event={"ID":"183b7fff-8db9-428f-942b-dc58bd36dfcf","Type":"ContainerStarted","Data":"f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65"} Jan 27 09:35:44 crc kubenswrapper[4799]: I0127 09:35:44.605026 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p2b8p" podStartSLOduration=3.596533267 podStartE2EDuration="6.604991172s" podCreationTimestamp="2026-01-27 09:35:38 +0000 UTC" firstStartedPulling="2026-01-27 
09:35:40.410507433 +0000 UTC m=+6606.721611508" lastFinishedPulling="2026-01-27 09:35:43.418965348 +0000 UTC m=+6609.730069413" observedRunningTime="2026-01-27 09:35:44.588788693 +0000 UTC m=+6610.899892798" watchObservedRunningTime="2026-01-27 09:35:44.604991172 +0000 UTC m=+6610.916095277" Jan 27 09:35:49 crc kubenswrapper[4799]: I0127 09:35:49.051852 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:49 crc kubenswrapper[4799]: I0127 09:35:49.052321 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:49 crc kubenswrapper[4799]: I0127 09:35:49.118829 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:49 crc kubenswrapper[4799]: I0127 09:35:49.695574 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:49 crc kubenswrapper[4799]: I0127 09:35:49.758158 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2b8p"] Jan 27 09:35:51 crc kubenswrapper[4799]: I0127 09:35:51.648576 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p2b8p" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="registry-server" containerID="cri-o://f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65" gracePeriod=2 Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.235255 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.360443 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-utilities\") pod \"183b7fff-8db9-428f-942b-dc58bd36dfcf\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.361107 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pq2p\" (UniqueName: \"kubernetes.io/projected/183b7fff-8db9-428f-942b-dc58bd36dfcf-kube-api-access-6pq2p\") pod \"183b7fff-8db9-428f-942b-dc58bd36dfcf\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.361221 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-catalog-content\") pod \"183b7fff-8db9-428f-942b-dc58bd36dfcf\" (UID: \"183b7fff-8db9-428f-942b-dc58bd36dfcf\") " Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.361620 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-utilities" (OuterVolumeSpecName: "utilities") pod "183b7fff-8db9-428f-942b-dc58bd36dfcf" (UID: "183b7fff-8db9-428f-942b-dc58bd36dfcf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.363539 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.369335 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183b7fff-8db9-428f-942b-dc58bd36dfcf-kube-api-access-6pq2p" (OuterVolumeSpecName: "kube-api-access-6pq2p") pod "183b7fff-8db9-428f-942b-dc58bd36dfcf" (UID: "183b7fff-8db9-428f-942b-dc58bd36dfcf"). InnerVolumeSpecName "kube-api-access-6pq2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.429393 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "183b7fff-8db9-428f-942b-dc58bd36dfcf" (UID: "183b7fff-8db9-428f-942b-dc58bd36dfcf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.465326 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pq2p\" (UniqueName: \"kubernetes.io/projected/183b7fff-8db9-428f-942b-dc58bd36dfcf-kube-api-access-6pq2p\") on node \"crc\" DevicePath \"\"" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.465367 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183b7fff-8db9-428f-942b-dc58bd36dfcf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.663039 4799 generic.go:334] "Generic (PLEG): container finished" podID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerID="f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65" exitCode=0 Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.663093 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2b8p" event={"ID":"183b7fff-8db9-428f-942b-dc58bd36dfcf","Type":"ContainerDied","Data":"f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65"} Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.663140 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2b8p" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.663173 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2b8p" event={"ID":"183b7fff-8db9-428f-942b-dc58bd36dfcf","Type":"ContainerDied","Data":"6b545ee7a6e5585e4c7fe3c4b09c45fed9efb91b12b14f6befe76e44b9d11fbe"} Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.663209 4799 scope.go:117] "RemoveContainer" containerID="f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.698908 4799 scope.go:117] "RemoveContainer" containerID="9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.709326 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2b8p"] Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.727481 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p2b8p"] Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.738358 4799 scope.go:117] "RemoveContainer" containerID="7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.784330 4799 scope.go:117] "RemoveContainer" containerID="f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65" Jan 27 09:35:52 crc kubenswrapper[4799]: E0127 09:35:52.785054 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65\": container with ID starting with f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65 not found: ID does not exist" containerID="f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.785111 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65"} err="failed to get container status \"f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65\": rpc error: code = NotFound desc = could not find container \"f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65\": container with ID starting with f86844a2ebd449ae5256e2f962bcfcc07d4060e01db19b059ea9fcd0d0ca8f65 not found: ID does not exist" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.785150 4799 scope.go:117] "RemoveContainer" containerID="9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301" Jan 27 09:35:52 crc kubenswrapper[4799]: E0127 09:35:52.785897 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301\": container with ID starting with 9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301 not found: ID does not exist" containerID="9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.785945 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301"} err="failed to get container status \"9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301\": rpc error: code = NotFound desc = could not find container \"9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301\": container with ID starting with 9670bc5e253a7d11fd187cc2a2215e55532a7d1260602d1af9e14f2876b5c301 not found: ID does not exist" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.786001 4799 scope.go:117] "RemoveContainer" containerID="7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10" Jan 27 09:35:52 crc kubenswrapper[4799]: E0127 
09:35:52.786541 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10\": container with ID starting with 7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10 not found: ID does not exist" containerID="7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10" Jan 27 09:35:52 crc kubenswrapper[4799]: I0127 09:35:52.786584 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10"} err="failed to get container status \"7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10\": rpc error: code = NotFound desc = could not find container \"7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10\": container with ID starting with 7d306eff7228a89cf53d785232b8b48b6bc39c171cfbfc77b832649fd9109b10 not found: ID does not exist" Jan 27 09:35:54 crc kubenswrapper[4799]: I0127 09:35:54.472068 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" path="/var/lib/kubelet/pods/183b7fff-8db9-428f-942b-dc58bd36dfcf/volumes" Jan 27 09:36:53 crc kubenswrapper[4799]: I0127 09:36:53.731854 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:36:53 crc kubenswrapper[4799]: I0127 09:36:53.732454 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 09:37:23 crc kubenswrapper[4799]: I0127 09:37:23.731347 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:37:23 crc kubenswrapper[4799]: I0127 09:37:23.732070 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:37:53 crc kubenswrapper[4799]: I0127 09:37:53.731148 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:37:53 crc kubenswrapper[4799]: I0127 09:37:53.732124 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:37:53 crc kubenswrapper[4799]: I0127 09:37:53.732193 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:37:53 crc kubenswrapper[4799]: I0127 09:37:53.733586 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5"} 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:37:53 crc kubenswrapper[4799]: I0127 09:37:53.733689 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" gracePeriod=600 Jan 27 09:37:53 crc kubenswrapper[4799]: E0127 09:37:53.863770 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:37:54 crc kubenswrapper[4799]: I0127 09:37:54.068416 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" exitCode=0 Jan 27 09:37:54 crc kubenswrapper[4799]: I0127 09:37:54.068573 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5"} Jan 27 09:37:54 crc kubenswrapper[4799]: I0127 09:37:54.068789 4799 scope.go:117] "RemoveContainer" containerID="7984a90d4da0fbe6467f9771a20999ac66281203ac9561f330757681a65007b8" Jan 27 09:37:54 crc kubenswrapper[4799]: I0127 09:37:54.069849 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 
27 09:37:54 crc kubenswrapper[4799]: E0127 09:37:54.070470 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:38:05 crc kubenswrapper[4799]: I0127 09:38:05.453872 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:38:05 crc kubenswrapper[4799]: E0127 09:38:05.454571 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:38:16 crc kubenswrapper[4799]: I0127 09:38:16.451993 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:38:16 crc kubenswrapper[4799]: E0127 09:38:16.452802 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:38:30 crc kubenswrapper[4799]: I0127 09:38:30.451714 4799 scope.go:117] "RemoveContainer" 
containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:38:30 crc kubenswrapper[4799]: E0127 09:38:30.452652 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:38:45 crc kubenswrapper[4799]: I0127 09:38:45.453013 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:38:45 crc kubenswrapper[4799]: E0127 09:38:45.454178 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:38:57 crc kubenswrapper[4799]: I0127 09:38:57.453269 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:38:57 crc kubenswrapper[4799]: E0127 09:38:57.455811 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:39:11 crc kubenswrapper[4799]: I0127 09:39:11.451393 4799 scope.go:117] 
"RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:39:11 crc kubenswrapper[4799]: E0127 09:39:11.452443 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:39:22 crc kubenswrapper[4799]: I0127 09:39:22.451633 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:39:22 crc kubenswrapper[4799]: E0127 09:39:22.452670 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.803806 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g5vdh"] Jan 27 09:39:35 crc kubenswrapper[4799]: E0127 09:39:35.805243 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="extract-utilities" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.805261 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="extract-utilities" Jan 27 09:39:35 crc kubenswrapper[4799]: E0127 09:39:35.805288 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="registry-server" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.805297 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="registry-server" Jan 27 09:39:35 crc kubenswrapper[4799]: E0127 09:39:35.805362 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="extract-content" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.805371 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="extract-content" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.805589 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="183b7fff-8db9-428f-942b-dc58bd36dfcf" containerName="registry-server" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.807397 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.827804 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5vdh"] Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.853389 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-utilities\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.853426 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcfs\" (UniqueName: \"kubernetes.io/projected/3b00a453-3a86-47dc-a441-841dea91140f-kube-api-access-vbcfs\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " 
pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.853469 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-catalog-content\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.956090 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-catalog-content\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.956405 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-utilities\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.956459 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbcfs\" (UniqueName: \"kubernetes.io/projected/3b00a453-3a86-47dc-a441-841dea91140f-kube-api-access-vbcfs\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.957086 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-utilities\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " 
pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.957084 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-catalog-content\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:35 crc kubenswrapper[4799]: I0127 09:39:35.979870 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbcfs\" (UniqueName: \"kubernetes.io/projected/3b00a453-3a86-47dc-a441-841dea91140f-kube-api-access-vbcfs\") pod \"redhat-operators-g5vdh\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:36 crc kubenswrapper[4799]: I0127 09:39:36.145975 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:36 crc kubenswrapper[4799]: I0127 09:39:36.452928 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:39:36 crc kubenswrapper[4799]: E0127 09:39:36.453667 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:39:36 crc kubenswrapper[4799]: I0127 09:39:36.607889 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5vdh"] Jan 27 09:39:37 crc kubenswrapper[4799]: I0127 09:39:37.285422 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="3b00a453-3a86-47dc-a441-841dea91140f" containerID="b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47" exitCode=0 Jan 27 09:39:37 crc kubenswrapper[4799]: I0127 09:39:37.285590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5vdh" event={"ID":"3b00a453-3a86-47dc-a441-841dea91140f","Type":"ContainerDied","Data":"b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47"} Jan 27 09:39:37 crc kubenswrapper[4799]: I0127 09:39:37.286936 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5vdh" event={"ID":"3b00a453-3a86-47dc-a441-841dea91140f","Type":"ContainerStarted","Data":"478c6ba4f61a5adc34e463c052c147b0746bcf314e75466c1968ee490afc4c1e"} Jan 27 09:39:38 crc kubenswrapper[4799]: I0127 09:39:38.296718 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5vdh" event={"ID":"3b00a453-3a86-47dc-a441-841dea91140f","Type":"ContainerStarted","Data":"6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc"} Jan 27 09:39:39 crc kubenswrapper[4799]: I0127 09:39:39.311966 4799 generic.go:334] "Generic (PLEG): container finished" podID="3b00a453-3a86-47dc-a441-841dea91140f" containerID="6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc" exitCode=0 Jan 27 09:39:39 crc kubenswrapper[4799]: I0127 09:39:39.312363 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5vdh" event={"ID":"3b00a453-3a86-47dc-a441-841dea91140f","Type":"ContainerDied","Data":"6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc"} Jan 27 09:39:40 crc kubenswrapper[4799]: I0127 09:39:40.334760 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5vdh" event={"ID":"3b00a453-3a86-47dc-a441-841dea91140f","Type":"ContainerStarted","Data":"19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8"} Jan 27 
09:39:40 crc kubenswrapper[4799]: I0127 09:39:40.368215 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g5vdh" podStartSLOduration=2.898110071 podStartE2EDuration="5.368196267s" podCreationTimestamp="2026-01-27 09:39:35 +0000 UTC" firstStartedPulling="2026-01-27 09:39:37.287933745 +0000 UTC m=+6843.599037810" lastFinishedPulling="2026-01-27 09:39:39.758019941 +0000 UTC m=+6846.069124006" observedRunningTime="2026-01-27 09:39:40.365577006 +0000 UTC m=+6846.676681101" watchObservedRunningTime="2026-01-27 09:39:40.368196267 +0000 UTC m=+6846.679300342" Jan 27 09:39:46 crc kubenswrapper[4799]: I0127 09:39:46.146770 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:46 crc kubenswrapper[4799]: I0127 09:39:46.147402 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:47 crc kubenswrapper[4799]: I0127 09:39:47.207477 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g5vdh" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="registry-server" probeResult="failure" output=< Jan 27 09:39:47 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 09:39:47 crc kubenswrapper[4799]: > Jan 27 09:39:47 crc kubenswrapper[4799]: I0127 09:39:47.452177 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:39:47 crc kubenswrapper[4799]: E0127 09:39:47.452662 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:39:56 crc kubenswrapper[4799]: I0127 09:39:56.224759 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:56 crc kubenswrapper[4799]: I0127 09:39:56.302197 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:56 crc kubenswrapper[4799]: I0127 09:39:56.473374 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5vdh"] Jan 27 09:39:57 crc kubenswrapper[4799]: I0127 09:39:57.557846 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g5vdh" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="registry-server" containerID="cri-o://19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8" gracePeriod=2 Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.031038 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.046616 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-catalog-content\") pod \"3b00a453-3a86-47dc-a441-841dea91140f\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.046683 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-utilities\") pod \"3b00a453-3a86-47dc-a441-841dea91140f\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.046726 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbcfs\" (UniqueName: \"kubernetes.io/projected/3b00a453-3a86-47dc-a441-841dea91140f-kube-api-access-vbcfs\") pod \"3b00a453-3a86-47dc-a441-841dea91140f\" (UID: \"3b00a453-3a86-47dc-a441-841dea91140f\") " Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.047562 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-utilities" (OuterVolumeSpecName: "utilities") pod "3b00a453-3a86-47dc-a441-841dea91140f" (UID: "3b00a453-3a86-47dc-a441-841dea91140f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.052662 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b00a453-3a86-47dc-a441-841dea91140f-kube-api-access-vbcfs" (OuterVolumeSpecName: "kube-api-access-vbcfs") pod "3b00a453-3a86-47dc-a441-841dea91140f" (UID: "3b00a453-3a86-47dc-a441-841dea91140f"). InnerVolumeSpecName "kube-api-access-vbcfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.053289 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.053447 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbcfs\" (UniqueName: \"kubernetes.io/projected/3b00a453-3a86-47dc-a441-841dea91140f-kube-api-access-vbcfs\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.186284 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b00a453-3a86-47dc-a441-841dea91140f" (UID: "3b00a453-3a86-47dc-a441-841dea91140f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.257153 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b00a453-3a86-47dc-a441-841dea91140f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.569119 4799 generic.go:334] "Generic (PLEG): container finished" podID="3b00a453-3a86-47dc-a441-841dea91140f" containerID="19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8" exitCode=0 Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.569166 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5vdh" event={"ID":"3b00a453-3a86-47dc-a441-841dea91140f","Type":"ContainerDied","Data":"19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8"} Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.569193 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-g5vdh" event={"ID":"3b00a453-3a86-47dc-a441-841dea91140f","Type":"ContainerDied","Data":"478c6ba4f61a5adc34e463c052c147b0746bcf314e75466c1968ee490afc4c1e"} Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.569193 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5vdh" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.569210 4799 scope.go:117] "RemoveContainer" containerID="19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.594493 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5vdh"] Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.598325 4799 scope.go:117] "RemoveContainer" containerID="6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.601235 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g5vdh"] Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.619459 4799 scope.go:117] "RemoveContainer" containerID="b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.658690 4799 scope.go:117] "RemoveContainer" containerID="19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8" Jan 27 09:39:58 crc kubenswrapper[4799]: E0127 09:39:58.659379 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8\": container with ID starting with 19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8 not found: ID does not exist" containerID="19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.659434 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8"} err="failed to get container status \"19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8\": rpc error: code = NotFound desc = could not find container \"19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8\": container with ID starting with 19e4ef46459124fdbc8cbddbcf7336e251c7e15fda371c2786a316091b4105f8 not found: ID does not exist" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.659464 4799 scope.go:117] "RemoveContainer" containerID="6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc" Jan 27 09:39:58 crc kubenswrapper[4799]: E0127 09:39:58.659962 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc\": container with ID starting with 6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc not found: ID does not exist" containerID="6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.659992 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc"} err="failed to get container status \"6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc\": rpc error: code = NotFound desc = could not find container \"6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc\": container with ID starting with 6d13a75dadc57db3d14ca2cd9c0cbdcfd91e91250cd6713e1c773d7f2b0ff6bc not found: ID does not exist" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.660008 4799 scope.go:117] "RemoveContainer" containerID="b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47" Jan 27 09:39:58 crc kubenswrapper[4799]: E0127 
09:39:58.660397 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47\": container with ID starting with b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47 not found: ID does not exist" containerID="b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47" Jan 27 09:39:58 crc kubenswrapper[4799]: I0127 09:39:58.660429 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47"} err="failed to get container status \"b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47\": rpc error: code = NotFound desc = could not find container \"b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47\": container with ID starting with b383830164d74d373aa0ef1bec2631b5bd99c799117cc270d1f6bf6407cb2b47 not found: ID does not exist" Jan 27 09:40:00 crc kubenswrapper[4799]: I0127 09:40:00.467899 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b00a453-3a86-47dc-a441-841dea91140f" path="/var/lib/kubelet/pods/3b00a453-3a86-47dc-a441-841dea91140f/volumes" Jan 27 09:40:02 crc kubenswrapper[4799]: I0127 09:40:02.452210 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:40:02 crc kubenswrapper[4799]: E0127 09:40:02.452893 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:40:16 crc kubenswrapper[4799]: I0127 09:40:16.451537 
4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:40:16 crc kubenswrapper[4799]: E0127 09:40:16.452685 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:40:28 crc kubenswrapper[4799]: I0127 09:40:28.451020 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:40:28 crc kubenswrapper[4799]: E0127 09:40:28.451896 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:40:41 crc kubenswrapper[4799]: I0127 09:40:41.456569 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:40:41 crc kubenswrapper[4799]: E0127 09:40:41.460125 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:40:55 crc kubenswrapper[4799]: I0127 
09:40:55.451105 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:40:55 crc kubenswrapper[4799]: E0127 09:40:55.452074 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:41:07 crc kubenswrapper[4799]: I0127 09:41:07.451838 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:41:07 crc kubenswrapper[4799]: E0127 09:41:07.452596 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:41:19 crc kubenswrapper[4799]: I0127 09:41:19.452320 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:41:19 crc kubenswrapper[4799]: E0127 09:41:19.453856 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:41:30 crc 
kubenswrapper[4799]: I0127 09:41:30.452952 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:41:30 crc kubenswrapper[4799]: E0127 09:41:30.454202 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:41:45 crc kubenswrapper[4799]: I0127 09:41:45.452565 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:41:45 crc kubenswrapper[4799]: E0127 09:41:45.453246 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:42:00 crc kubenswrapper[4799]: I0127 09:42:00.452467 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:42:00 crc kubenswrapper[4799]: E0127 09:42:00.453491 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 
27 09:42:11 crc kubenswrapper[4799]: I0127 09:42:11.452333 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:42:11 crc kubenswrapper[4799]: E0127 09:42:11.453150 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:42:24 crc kubenswrapper[4799]: I0127 09:42:24.469078 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:42:24 crc kubenswrapper[4799]: E0127 09:42:24.470853 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:42:38 crc kubenswrapper[4799]: I0127 09:42:38.452554 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:42:38 crc kubenswrapper[4799]: E0127 09:42:38.454362 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:42:50 crc kubenswrapper[4799]: I0127 09:42:50.452605 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:42:50 crc kubenswrapper[4799]: E0127 09:42:50.453678 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.492139 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b2gjw"] Jan 27 09:42:55 crc kubenswrapper[4799]: E0127 09:42:55.494676 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="extract-utilities" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.494721 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="extract-utilities" Jan 27 09:42:55 crc kubenswrapper[4799]: E0127 09:42:55.494786 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="extract-content" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.494809 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="extract-content" Jan 27 09:42:55 crc kubenswrapper[4799]: E0127 09:42:55.494867 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="registry-server" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.494880 4799 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="registry-server" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.495271 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b00a453-3a86-47dc-a441-841dea91140f" containerName="registry-server" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.498604 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.511421 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2gjw"] Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.596363 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-catalog-content\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.596428 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-utilities\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.596492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkkv\" (UniqueName: \"kubernetes.io/projected/dad6173e-dca3-4904-ba76-f78ef29bb94b-kube-api-access-khkkv\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.699006 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-catalog-content\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.699071 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-utilities\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.699127 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkkv\" (UniqueName: \"kubernetes.io/projected/dad6173e-dca3-4904-ba76-f78ef29bb94b-kube-api-access-khkkv\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.700068 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-catalog-content\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.700373 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-utilities\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.721751 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-khkkv\" (UniqueName: \"kubernetes.io/projected/dad6173e-dca3-4904-ba76-f78ef29bb94b-kube-api-access-khkkv\") pod \"redhat-marketplace-b2gjw\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:55 crc kubenswrapper[4799]: I0127 09:42:55.849562 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:42:56 crc kubenswrapper[4799]: I0127 09:42:56.340635 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2gjw"] Jan 27 09:42:56 crc kubenswrapper[4799]: I0127 09:42:56.604509 4799 generic.go:334] "Generic (PLEG): container finished" podID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerID="7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975" exitCode=0 Jan 27 09:42:56 crc kubenswrapper[4799]: I0127 09:42:56.604558 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2gjw" event={"ID":"dad6173e-dca3-4904-ba76-f78ef29bb94b","Type":"ContainerDied","Data":"7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975"} Jan 27 09:42:56 crc kubenswrapper[4799]: I0127 09:42:56.604590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2gjw" event={"ID":"dad6173e-dca3-4904-ba76-f78ef29bb94b","Type":"ContainerStarted","Data":"440eb95c6e314a3b0dced4f047db499449036fb61ae94b77e5970429d0989585"} Jan 27 09:42:56 crc kubenswrapper[4799]: I0127 09:42:56.611916 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:42:57 crc kubenswrapper[4799]: I0127 09:42:57.623448 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2gjw" 
event={"ID":"dad6173e-dca3-4904-ba76-f78ef29bb94b","Type":"ContainerStarted","Data":"e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795"} Jan 27 09:42:58 crc kubenswrapper[4799]: I0127 09:42:58.637525 4799 generic.go:334] "Generic (PLEG): container finished" podID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerID="e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795" exitCode=0 Jan 27 09:42:58 crc kubenswrapper[4799]: I0127 09:42:58.637671 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2gjw" event={"ID":"dad6173e-dca3-4904-ba76-f78ef29bb94b","Type":"ContainerDied","Data":"e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795"} Jan 27 09:42:59 crc kubenswrapper[4799]: I0127 09:42:59.652455 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2gjw" event={"ID":"dad6173e-dca3-4904-ba76-f78ef29bb94b","Type":"ContainerStarted","Data":"9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf"} Jan 27 09:42:59 crc kubenswrapper[4799]: I0127 09:42:59.695765 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b2gjw" podStartSLOduration=2.244095541 podStartE2EDuration="4.695738195s" podCreationTimestamp="2026-01-27 09:42:55 +0000 UTC" firstStartedPulling="2026-01-27 09:42:56.609753669 +0000 UTC m=+7042.920857774" lastFinishedPulling="2026-01-27 09:42:59.061396353 +0000 UTC m=+7045.372500428" observedRunningTime="2026-01-27 09:42:59.681211732 +0000 UTC m=+7045.992315807" watchObservedRunningTime="2026-01-27 09:42:59.695738195 +0000 UTC m=+7046.006842290" Jan 27 09:43:03 crc kubenswrapper[4799]: I0127 09:43:03.454789 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:43:04 crc kubenswrapper[4799]: I0127 09:43:04.703158 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"0956df9e92c28626e5fb71647d959dd7567b268f85e0a129e8953e54243d197e"} Jan 27 09:43:05 crc kubenswrapper[4799]: I0127 09:43:05.849894 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:43:05 crc kubenswrapper[4799]: I0127 09:43:05.850661 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:43:05 crc kubenswrapper[4799]: I0127 09:43:05.936005 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:43:06 crc kubenswrapper[4799]: I0127 09:43:06.788404 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:43:06 crc kubenswrapper[4799]: I0127 09:43:06.866765 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2gjw"] Jan 27 09:43:08 crc kubenswrapper[4799]: I0127 09:43:08.758317 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b2gjw" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="registry-server" containerID="cri-o://9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf" gracePeriod=2 Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.289055 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.457933 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-utilities\") pod \"dad6173e-dca3-4904-ba76-f78ef29bb94b\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.458101 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkkv\" (UniqueName: \"kubernetes.io/projected/dad6173e-dca3-4904-ba76-f78ef29bb94b-kube-api-access-khkkv\") pod \"dad6173e-dca3-4904-ba76-f78ef29bb94b\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.458248 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-catalog-content\") pod \"dad6173e-dca3-4904-ba76-f78ef29bb94b\" (UID: \"dad6173e-dca3-4904-ba76-f78ef29bb94b\") " Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.459176 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-utilities" (OuterVolumeSpecName: "utilities") pod "dad6173e-dca3-4904-ba76-f78ef29bb94b" (UID: "dad6173e-dca3-4904-ba76-f78ef29bb94b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.466233 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad6173e-dca3-4904-ba76-f78ef29bb94b-kube-api-access-khkkv" (OuterVolumeSpecName: "kube-api-access-khkkv") pod "dad6173e-dca3-4904-ba76-f78ef29bb94b" (UID: "dad6173e-dca3-4904-ba76-f78ef29bb94b"). InnerVolumeSpecName "kube-api-access-khkkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.492009 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dad6173e-dca3-4904-ba76-f78ef29bb94b" (UID: "dad6173e-dca3-4904-ba76-f78ef29bb94b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.560551 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.560585 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6173e-dca3-4904-ba76-f78ef29bb94b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.560595 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkkv\" (UniqueName: \"kubernetes.io/projected/dad6173e-dca3-4904-ba76-f78ef29bb94b-kube-api-access-khkkv\") on node \"crc\" DevicePath \"\"" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.772188 4799 generic.go:334] "Generic (PLEG): container finished" podID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerID="9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf" exitCode=0 Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.772235 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2gjw" event={"ID":"dad6173e-dca3-4904-ba76-f78ef29bb94b","Type":"ContainerDied","Data":"9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf"} Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.772278 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-b2gjw" event={"ID":"dad6173e-dca3-4904-ba76-f78ef29bb94b","Type":"ContainerDied","Data":"440eb95c6e314a3b0dced4f047db499449036fb61ae94b77e5970429d0989585"} Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.772313 4799 scope.go:117] "RemoveContainer" containerID="9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.772351 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2gjw" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.800772 4799 scope.go:117] "RemoveContainer" containerID="e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.836136 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2gjw"] Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.845404 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2gjw"] Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.845741 4799 scope.go:117] "RemoveContainer" containerID="7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.876524 4799 scope.go:117] "RemoveContainer" containerID="9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf" Jan 27 09:43:09 crc kubenswrapper[4799]: E0127 09:43:09.877218 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf\": container with ID starting with 9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf not found: ID does not exist" containerID="9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.877262 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf"} err="failed to get container status \"9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf\": rpc error: code = NotFound desc = could not find container \"9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf\": container with ID starting with 9b0b489671ed33f4196916c54ae228a6b7fa8d5a379d44c67ed3b39a9c9853bf not found: ID does not exist" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.877294 4799 scope.go:117] "RemoveContainer" containerID="e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795" Jan 27 09:43:09 crc kubenswrapper[4799]: E0127 09:43:09.877741 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795\": container with ID starting with e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795 not found: ID does not exist" containerID="e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.877765 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795"} err="failed to get container status \"e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795\": rpc error: code = NotFound desc = could not find container \"e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795\": container with ID starting with e7057d8c604a12c54a073b82c5e2caba6fe3f537e6cd82931b75f6be74b51795 not found: ID does not exist" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.877782 4799 scope.go:117] "RemoveContainer" containerID="7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975" Jan 27 09:43:09 crc kubenswrapper[4799]: E0127 
09:43:09.878241 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975\": container with ID starting with 7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975 not found: ID does not exist" containerID="7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975" Jan 27 09:43:09 crc kubenswrapper[4799]: I0127 09:43:09.878275 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975"} err="failed to get container status \"7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975\": rpc error: code = NotFound desc = could not find container \"7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975\": container with ID starting with 7223310eacf47728fc9ebcb6bbc4fd8787a0dcb524b04532bfc113c2d8661975 not found: ID does not exist" Jan 27 09:43:10 crc kubenswrapper[4799]: I0127 09:43:10.464357 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" path="/var/lib/kubelet/pods/dad6173e-dca3-4904-ba76-f78ef29bb94b/volumes" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.187248 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll"] Jan 27 09:45:00 crc kubenswrapper[4799]: E0127 09:45:00.188624 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="registry-server" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.188647 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="registry-server" Jan 27 09:45:00 crc kubenswrapper[4799]: E0127 09:45:00.188671 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="extract-content" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.188686 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="extract-content" Jan 27 09:45:00 crc kubenswrapper[4799]: E0127 09:45:00.188749 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="extract-utilities" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.188764 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="extract-utilities" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.189118 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad6173e-dca3-4904-ba76-f78ef29bb94b" containerName="registry-server" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.190269 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.228149 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.228282 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.239959 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll"] Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.331224 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-secret-volume\") pod \"collect-profiles-29491785-5kvll\" 
(UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.331462 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-config-volume\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.331517 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-kube-api-access-fq5b5\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.433613 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-config-volume\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.433677 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-kube-api-access-fq5b5\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.433782 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-secret-volume\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.434852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-config-volume\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.440699 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-secret-volume\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.450256 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-kube-api-access-fq5b5\") pod \"collect-profiles-29491785-5kvll\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:00 crc kubenswrapper[4799]: I0127 09:45:00.559888 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:01 crc kubenswrapper[4799]: W0127 09:45:01.031645 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e9ba4d4_4efb_4cd8_856d_760fd0a7c52e.slice/crio-5eb50e8a31c6d9aacf64d579b55808562f38f80ca997a057ee435701b1dc323a WatchSource:0}: Error finding container 5eb50e8a31c6d9aacf64d579b55808562f38f80ca997a057ee435701b1dc323a: Status 404 returned error can't find the container with id 5eb50e8a31c6d9aacf64d579b55808562f38f80ca997a057ee435701b1dc323a Jan 27 09:45:01 crc kubenswrapper[4799]: I0127 09:45:01.032927 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll"] Jan 27 09:45:02 crc kubenswrapper[4799]: I0127 09:45:02.029066 4799 generic.go:334] "Generic (PLEG): container finished" podID="8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" containerID="b0a2abb695ac227783cf064ac4326e466d5f64e58ac2e54c59af4d6f46881140" exitCode=0 Jan 27 09:45:02 crc kubenswrapper[4799]: I0127 09:45:02.029173 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" event={"ID":"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e","Type":"ContainerDied","Data":"b0a2abb695ac227783cf064ac4326e466d5f64e58ac2e54c59af4d6f46881140"} Jan 27 09:45:02 crc kubenswrapper[4799]: I0127 09:45:02.029469 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" event={"ID":"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e","Type":"ContainerStarted","Data":"5eb50e8a31c6d9aacf64d579b55808562f38f80ca997a057ee435701b1dc323a"} Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.384142 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.443390 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-config-volume\") pod \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.443476 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-secret-volume\") pod \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.443685 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-kube-api-access-fq5b5\") pod \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\" (UID: \"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e\") " Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.444470 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" (UID: "8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.449513 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-kube-api-access-fq5b5" (OuterVolumeSpecName: "kube-api-access-fq5b5") pod "8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" (UID: "8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e"). 
InnerVolumeSpecName "kube-api-access-fq5b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.449585 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" (UID: "8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.544920 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-kube-api-access-fq5b5\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.545928 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:03 crc kubenswrapper[4799]: I0127 09:45:03.545938 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:04 crc kubenswrapper[4799]: I0127 09:45:04.049518 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" event={"ID":"8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e","Type":"ContainerDied","Data":"5eb50e8a31c6d9aacf64d579b55808562f38f80ca997a057ee435701b1dc323a"} Jan 27 09:45:04 crc kubenswrapper[4799]: I0127 09:45:04.049854 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb50e8a31c6d9aacf64d579b55808562f38f80ca997a057ee435701b1dc323a" Jan 27 09:45:04 crc kubenswrapper[4799]: I0127 09:45:04.049589 4799 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll" Jan 27 09:45:04 crc kubenswrapper[4799]: I0127 09:45:04.468574 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t"] Jan 27 09:45:04 crc kubenswrapper[4799]: I0127 09:45:04.473698 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491740-gj79t"] Jan 27 09:45:06 crc kubenswrapper[4799]: I0127 09:45:06.473018 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3fb7cb9-2bed-4322-bef1-13c066775faf" path="/var/lib/kubelet/pods/c3fb7cb9-2bed-4322-bef1-13c066775faf/volumes" Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.860897 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pzjz8"] Jan 27 09:45:07 crc kubenswrapper[4799]: E0127 09:45:07.862363 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" containerName="collect-profiles" Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.862926 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" containerName="collect-profiles" Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.863573 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" containerName="collect-profiles" Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.866724 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.904605 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzjz8"] Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.961900 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-utilities\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.961988 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdp9d\" (UniqueName: \"kubernetes.io/projected/2dab5ee2-18f7-4da5-9211-09d3021749a7-kube-api-access-bdp9d\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:07 crc kubenswrapper[4799]: I0127 09:45:07.962069 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-catalog-content\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.064007 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-catalog-content\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.064157 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-utilities\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.064208 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdp9d\" (UniqueName: \"kubernetes.io/projected/2dab5ee2-18f7-4da5-9211-09d3021749a7-kube-api-access-bdp9d\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.064628 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-catalog-content\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.064673 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-utilities\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.083250 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdp9d\" (UniqueName: \"kubernetes.io/projected/2dab5ee2-18f7-4da5-9211-09d3021749a7-kube-api-access-bdp9d\") pod \"community-operators-pzjz8\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.200924 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:08 crc kubenswrapper[4799]: I0127 09:45:08.749046 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzjz8"] Jan 27 09:45:08 crc kubenswrapper[4799]: W0127 09:45:08.757722 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dab5ee2_18f7_4da5_9211_09d3021749a7.slice/crio-178ead264f7b7fca5e536e3a5823259fff53046073bc219317dbc517cfe31c5e WatchSource:0}: Error finding container 178ead264f7b7fca5e536e3a5823259fff53046073bc219317dbc517cfe31c5e: Status 404 returned error can't find the container with id 178ead264f7b7fca5e536e3a5823259fff53046073bc219317dbc517cfe31c5e Jan 27 09:45:09 crc kubenswrapper[4799]: I0127 09:45:09.113967 4799 generic.go:334] "Generic (PLEG): container finished" podID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerID="a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4" exitCode=0 Jan 27 09:45:09 crc kubenswrapper[4799]: I0127 09:45:09.114193 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzjz8" event={"ID":"2dab5ee2-18f7-4da5-9211-09d3021749a7","Type":"ContainerDied","Data":"a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4"} Jan 27 09:45:09 crc kubenswrapper[4799]: I0127 09:45:09.114477 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzjz8" event={"ID":"2dab5ee2-18f7-4da5-9211-09d3021749a7","Type":"ContainerStarted","Data":"178ead264f7b7fca5e536e3a5823259fff53046073bc219317dbc517cfe31c5e"} Jan 27 09:45:10 crc kubenswrapper[4799]: I0127 09:45:10.124326 4799 generic.go:334] "Generic (PLEG): container finished" podID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerID="670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b" exitCode=0 Jan 27 09:45:10 crc kubenswrapper[4799]: I0127 
09:45:10.124713 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzjz8" event={"ID":"2dab5ee2-18f7-4da5-9211-09d3021749a7","Type":"ContainerDied","Data":"670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b"} Jan 27 09:45:11 crc kubenswrapper[4799]: I0127 09:45:11.138521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzjz8" event={"ID":"2dab5ee2-18f7-4da5-9211-09d3021749a7","Type":"ContainerStarted","Data":"feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1"} Jan 27 09:45:11 crc kubenswrapper[4799]: I0127 09:45:11.158522 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pzjz8" podStartSLOduration=2.649939711 podStartE2EDuration="4.158498205s" podCreationTimestamp="2026-01-27 09:45:07 +0000 UTC" firstStartedPulling="2026-01-27 09:45:09.116071132 +0000 UTC m=+7175.427175197" lastFinishedPulling="2026-01-27 09:45:10.624629616 +0000 UTC m=+7176.935733691" observedRunningTime="2026-01-27 09:45:11.156864611 +0000 UTC m=+7177.467968696" watchObservedRunningTime="2026-01-27 09:45:11.158498205 +0000 UTC m=+7177.469602280" Jan 27 09:45:18 crc kubenswrapper[4799]: I0127 09:45:18.201958 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:18 crc kubenswrapper[4799]: I0127 09:45:18.202833 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:18 crc kubenswrapper[4799]: I0127 09:45:18.292958 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:19 crc kubenswrapper[4799]: I0127 09:45:19.270958 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pzjz8" Jan 
27 09:45:19 crc kubenswrapper[4799]: I0127 09:45:19.328587 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzjz8"] Jan 27 09:45:21 crc kubenswrapper[4799]: I0127 09:45:21.241093 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pzjz8" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="registry-server" containerID="cri-o://feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1" gracePeriod=2 Jan 27 09:45:21 crc kubenswrapper[4799]: I0127 09:45:21.894428 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.019809 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-catalog-content\") pod \"2dab5ee2-18f7-4da5-9211-09d3021749a7\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.019946 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdp9d\" (UniqueName: \"kubernetes.io/projected/2dab5ee2-18f7-4da5-9211-09d3021749a7-kube-api-access-bdp9d\") pod \"2dab5ee2-18f7-4da5-9211-09d3021749a7\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.020267 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-utilities\") pod \"2dab5ee2-18f7-4da5-9211-09d3021749a7\" (UID: \"2dab5ee2-18f7-4da5-9211-09d3021749a7\") " Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.022776 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-utilities" (OuterVolumeSpecName: "utilities") pod "2dab5ee2-18f7-4da5-9211-09d3021749a7" (UID: "2dab5ee2-18f7-4da5-9211-09d3021749a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.029372 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dab5ee2-18f7-4da5-9211-09d3021749a7-kube-api-access-bdp9d" (OuterVolumeSpecName: "kube-api-access-bdp9d") pod "2dab5ee2-18f7-4da5-9211-09d3021749a7" (UID: "2dab5ee2-18f7-4da5-9211-09d3021749a7"). InnerVolumeSpecName "kube-api-access-bdp9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.097017 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dab5ee2-18f7-4da5-9211-09d3021749a7" (UID: "2dab5ee2-18f7-4da5-9211-09d3021749a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.123192 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.123261 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab5ee2-18f7-4da5-9211-09d3021749a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.123286 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdp9d\" (UniqueName: \"kubernetes.io/projected/2dab5ee2-18f7-4da5-9211-09d3021749a7-kube-api-access-bdp9d\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.255755 4799 generic.go:334] "Generic (PLEG): container finished" podID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerID="feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1" exitCode=0 Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.255824 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzjz8" event={"ID":"2dab5ee2-18f7-4da5-9211-09d3021749a7","Type":"ContainerDied","Data":"feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1"} Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.255904 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzjz8" event={"ID":"2dab5ee2-18f7-4da5-9211-09d3021749a7","Type":"ContainerDied","Data":"178ead264f7b7fca5e536e3a5823259fff53046073bc219317dbc517cfe31c5e"} Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.255941 4799 scope.go:117] "RemoveContainer" containerID="feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 
09:45:22.256513 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzjz8" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.294521 4799 scope.go:117] "RemoveContainer" containerID="670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.302050 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzjz8"] Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.308717 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pzjz8"] Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.323032 4799 scope.go:117] "RemoveContainer" containerID="a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.382269 4799 scope.go:117] "RemoveContainer" containerID="feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1" Jan 27 09:45:22 crc kubenswrapper[4799]: E0127 09:45:22.382743 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1\": container with ID starting with feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1 not found: ID does not exist" containerID="feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.382799 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1"} err="failed to get container status \"feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1\": rpc error: code = NotFound desc = could not find container \"feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1\": container with ID starting with 
feaa95a5426c5ddeb12357f7a2f29606a0fae5c6a17779a8e841fffc30f483e1 not found: ID does not exist" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.382826 4799 scope.go:117] "RemoveContainer" containerID="670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b" Jan 27 09:45:22 crc kubenswrapper[4799]: E0127 09:45:22.383325 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b\": container with ID starting with 670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b not found: ID does not exist" containerID="670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.383360 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b"} err="failed to get container status \"670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b\": rpc error: code = NotFound desc = could not find container \"670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b\": container with ID starting with 670deda8a25f3ad2eb6255b43e586a8516edbb0a2ece076f24e98b3498203e0b not found: ID does not exist" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.383383 4799 scope.go:117] "RemoveContainer" containerID="a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4" Jan 27 09:45:22 crc kubenswrapper[4799]: E0127 09:45:22.383676 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4\": container with ID starting with a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4 not found: ID does not exist" containerID="a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4" Jan 27 09:45:22 crc 
kubenswrapper[4799]: I0127 09:45:22.383732 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4"} err="failed to get container status \"a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4\": rpc error: code = NotFound desc = could not find container \"a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4\": container with ID starting with a727490866395e0639b396d232e6afd14625f114467485f8264cb5dfabbb22b4 not found: ID does not exist" Jan 27 09:45:22 crc kubenswrapper[4799]: I0127 09:45:22.480221 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" path="/var/lib/kubelet/pods/2dab5ee2-18f7-4da5-9211-09d3021749a7/volumes" Jan 27 09:45:23 crc kubenswrapper[4799]: I0127 09:45:23.731710 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:45:23 crc kubenswrapper[4799]: I0127 09:45:23.732426 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:45:44 crc kubenswrapper[4799]: I0127 09:45:44.050628 4799 scope.go:117] "RemoveContainer" containerID="522e8dfd621985c71cbf9bfe5a0795f5a5eb1eb421d64baa7c496a6824962f7e" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.216976 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7v82"] Jan 27 09:45:46 crc kubenswrapper[4799]: E0127 09:45:46.218093 4799 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="extract-content" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.218111 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="extract-content" Jan 27 09:45:46 crc kubenswrapper[4799]: E0127 09:45:46.218135 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="extract-utilities" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.218143 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="extract-utilities" Jan 27 09:45:46 crc kubenswrapper[4799]: E0127 09:45:46.218188 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="registry-server" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.218196 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="registry-server" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.218582 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dab5ee2-18f7-4da5-9211-09d3021749a7" containerName="registry-server" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.220661 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.253810 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7v82"] Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.404348 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-catalog-content\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.404420 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkp5\" (UniqueName: \"kubernetes.io/projected/24ffd01d-e2eb-453c-942d-d69890717531-kube-api-access-lbkp5\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.404457 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-utilities\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.506088 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-catalog-content\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.506166 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lbkp5\" (UniqueName: \"kubernetes.io/projected/24ffd01d-e2eb-453c-942d-d69890717531-kube-api-access-lbkp5\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.506207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-utilities\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.506697 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-catalog-content\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.507006 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-utilities\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.538568 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbkp5\" (UniqueName: \"kubernetes.io/projected/24ffd01d-e2eb-453c-942d-d69890717531-kube-api-access-lbkp5\") pod \"certified-operators-r7v82\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:46 crc kubenswrapper[4799]: I0127 09:45:46.548223 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:47 crc kubenswrapper[4799]: I0127 09:45:47.057517 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7v82"] Jan 27 09:45:47 crc kubenswrapper[4799]: I0127 09:45:47.532974 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7v82" event={"ID":"24ffd01d-e2eb-453c-942d-d69890717531","Type":"ContainerDied","Data":"a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669"} Jan 27 09:45:47 crc kubenswrapper[4799]: I0127 09:45:47.532736 4799 generic.go:334] "Generic (PLEG): container finished" podID="24ffd01d-e2eb-453c-942d-d69890717531" containerID="a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669" exitCode=0 Jan 27 09:45:47 crc kubenswrapper[4799]: I0127 09:45:47.533483 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7v82" event={"ID":"24ffd01d-e2eb-453c-942d-d69890717531","Type":"ContainerStarted","Data":"c89b73311b07b0dcd2e955c224d9e2b7f58193693a064c1ebad6e1be8c19e57d"} Jan 27 09:45:49 crc kubenswrapper[4799]: I0127 09:45:49.562262 4799 generic.go:334] "Generic (PLEG): container finished" podID="24ffd01d-e2eb-453c-942d-d69890717531" containerID="f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d" exitCode=0 Jan 27 09:45:49 crc kubenswrapper[4799]: I0127 09:45:49.562456 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7v82" event={"ID":"24ffd01d-e2eb-453c-942d-d69890717531","Type":"ContainerDied","Data":"f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d"} Jan 27 09:45:50 crc kubenswrapper[4799]: I0127 09:45:50.582486 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7v82" 
event={"ID":"24ffd01d-e2eb-453c-942d-d69890717531","Type":"ContainerStarted","Data":"1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587"} Jan 27 09:45:50 crc kubenswrapper[4799]: I0127 09:45:50.645082 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7v82" podStartSLOduration=2.100729657 podStartE2EDuration="4.645064904s" podCreationTimestamp="2026-01-27 09:45:46 +0000 UTC" firstStartedPulling="2026-01-27 09:45:47.534943593 +0000 UTC m=+7213.846047668" lastFinishedPulling="2026-01-27 09:45:50.07927885 +0000 UTC m=+7216.390382915" observedRunningTime="2026-01-27 09:45:50.641499338 +0000 UTC m=+7216.952603403" watchObservedRunningTime="2026-01-27 09:45:50.645064904 +0000 UTC m=+7216.956168979" Jan 27 09:45:53 crc kubenswrapper[4799]: I0127 09:45:53.731402 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:45:53 crc kubenswrapper[4799]: I0127 09:45:53.732408 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:45:56 crc kubenswrapper[4799]: I0127 09:45:56.549673 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:56 crc kubenswrapper[4799]: I0127 09:45:56.550216 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:56 crc kubenswrapper[4799]: I0127 09:45:56.615195 4799 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:57 crc kubenswrapper[4799]: I0127 09:45:57.022188 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:57 crc kubenswrapper[4799]: I0127 09:45:57.092201 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7v82"] Jan 27 09:45:58 crc kubenswrapper[4799]: I0127 09:45:58.981945 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7v82" podUID="24ffd01d-e2eb-453c-942d-d69890717531" containerName="registry-server" containerID="cri-o://1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587" gracePeriod=2 Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.520660 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.667232 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbkp5\" (UniqueName: \"kubernetes.io/projected/24ffd01d-e2eb-453c-942d-d69890717531-kube-api-access-lbkp5\") pod \"24ffd01d-e2eb-453c-942d-d69890717531\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.667312 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-catalog-content\") pod \"24ffd01d-e2eb-453c-942d-d69890717531\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.667583 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-utilities\") pod \"24ffd01d-e2eb-453c-942d-d69890717531\" (UID: \"24ffd01d-e2eb-453c-942d-d69890717531\") " Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.669714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-utilities" (OuterVolumeSpecName: "utilities") pod "24ffd01d-e2eb-453c-942d-d69890717531" (UID: "24ffd01d-e2eb-453c-942d-d69890717531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.677578 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ffd01d-e2eb-453c-942d-d69890717531-kube-api-access-lbkp5" (OuterVolumeSpecName: "kube-api-access-lbkp5") pod "24ffd01d-e2eb-453c-942d-d69890717531" (UID: "24ffd01d-e2eb-453c-942d-d69890717531"). InnerVolumeSpecName "kube-api-access-lbkp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.729514 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24ffd01d-e2eb-453c-942d-d69890717531" (UID: "24ffd01d-e2eb-453c-942d-d69890717531"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.770900 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.770983 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbkp5\" (UniqueName: \"kubernetes.io/projected/24ffd01d-e2eb-453c-942d-d69890717531-kube-api-access-lbkp5\") on node \"crc\" DevicePath \"\"" Jan 27 09:45:59 crc kubenswrapper[4799]: I0127 09:45:59.771002 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ffd01d-e2eb-453c-942d-d69890717531-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.000772 4799 generic.go:334] "Generic (PLEG): container finished" podID="24ffd01d-e2eb-453c-942d-d69890717531" containerID="1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587" exitCode=0 Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.000844 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7v82" event={"ID":"24ffd01d-e2eb-453c-942d-d69890717531","Type":"ContainerDied","Data":"1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587"} Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.000857 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7v82" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.000892 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7v82" event={"ID":"24ffd01d-e2eb-453c-942d-d69890717531","Type":"ContainerDied","Data":"c89b73311b07b0dcd2e955c224d9e2b7f58193693a064c1ebad6e1be8c19e57d"} Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.000921 4799 scope.go:117] "RemoveContainer" containerID="1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.034634 4799 scope.go:117] "RemoveContainer" containerID="f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.063689 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7v82"] Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.072426 4799 scope.go:117] "RemoveContainer" containerID="a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.081527 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7v82"] Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.123465 4799 scope.go:117] "RemoveContainer" containerID="1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587" Jan 27 09:46:00 crc kubenswrapper[4799]: E0127 09:46:00.123897 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587\": container with ID starting with 1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587 not found: ID does not exist" containerID="1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.123929 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587"} err="failed to get container status \"1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587\": rpc error: code = NotFound desc = could not find container \"1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587\": container with ID starting with 1416f7be91a07431a5c35bd3e6f0b1d23282e02d506bb4186907e8d0cc110587 not found: ID does not exist" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.123949 4799 scope.go:117] "RemoveContainer" containerID="f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d" Jan 27 09:46:00 crc kubenswrapper[4799]: E0127 09:46:00.124429 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d\": container with ID starting with f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d not found: ID does not exist" containerID="f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.124454 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d"} err="failed to get container status \"f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d\": rpc error: code = NotFound desc = could not find container \"f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d\": container with ID starting with f3a9af1a63b77186f040861e3f57c940b04e58a372d2156fe2d79dffb8f5909d not found: ID does not exist" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.124467 4799 scope.go:117] "RemoveContainer" containerID="a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669" Jan 27 09:46:00 crc kubenswrapper[4799]: E0127 
09:46:00.124683 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669\": container with ID starting with a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669 not found: ID does not exist" containerID="a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.124707 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669"} err="failed to get container status \"a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669\": rpc error: code = NotFound desc = could not find container \"a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669\": container with ID starting with a827f61cc709d10074674601274f2737eb00722eafc740fececcd6448e7fc669 not found: ID does not exist" Jan 27 09:46:00 crc kubenswrapper[4799]: I0127 09:46:00.467839 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ffd01d-e2eb-453c-942d-d69890717531" path="/var/lib/kubelet/pods/24ffd01d-e2eb-453c-942d-d69890717531/volumes" Jan 27 09:46:23 crc kubenswrapper[4799]: I0127 09:46:23.732495 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:46:23 crc kubenswrapper[4799]: I0127 09:46:23.733391 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 09:46:23 crc kubenswrapper[4799]: I0127 09:46:23.733454 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:46:23 crc kubenswrapper[4799]: I0127 09:46:23.734505 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0956df9e92c28626e5fb71647d959dd7567b268f85e0a129e8953e54243d197e"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:46:23 crc kubenswrapper[4799]: I0127 09:46:23.734573 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://0956df9e92c28626e5fb71647d959dd7567b268f85e0a129e8953e54243d197e" gracePeriod=600 Jan 27 09:46:24 crc kubenswrapper[4799]: I0127 09:46:24.261733 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="0956df9e92c28626e5fb71647d959dd7567b268f85e0a129e8953e54243d197e" exitCode=0 Jan 27 09:46:24 crc kubenswrapper[4799]: I0127 09:46:24.261811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"0956df9e92c28626e5fb71647d959dd7567b268f85e0a129e8953e54243d197e"} Jan 27 09:46:24 crc kubenswrapper[4799]: I0127 09:46:24.262201 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02"} Jan 27 09:46:24 crc 
kubenswrapper[4799]: I0127 09:46:24.262238 4799 scope.go:117] "RemoveContainer" containerID="f82e3db0698ded1eb70afb3e9da0b443fad9d38a638022a0cd178127405912f5" Jan 27 09:48:53 crc kubenswrapper[4799]: I0127 09:48:53.731109 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:48:53 crc kubenswrapper[4799]: I0127 09:48:53.732167 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:49:23 crc kubenswrapper[4799]: I0127 09:49:23.730813 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:49:23 crc kubenswrapper[4799]: I0127 09:49:23.731581 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:49:53 crc kubenswrapper[4799]: I0127 09:49:53.731518 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 27 09:49:53 crc kubenswrapper[4799]: I0127 09:49:53.733854 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:49:53 crc kubenswrapper[4799]: I0127 09:49:53.734064 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:49:53 crc kubenswrapper[4799]: I0127 09:49:53.735156 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:49:53 crc kubenswrapper[4799]: I0127 09:49:53.735366 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" gracePeriod=600 Jan 27 09:49:53 crc kubenswrapper[4799]: E0127 09:49:53.870944 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:49:54 crc kubenswrapper[4799]: I0127 09:49:54.802816 
4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" exitCode=0 Jan 27 09:49:54 crc kubenswrapper[4799]: I0127 09:49:54.802864 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02"} Jan 27 09:49:54 crc kubenswrapper[4799]: I0127 09:49:54.803357 4799 scope.go:117] "RemoveContainer" containerID="0956df9e92c28626e5fb71647d959dd7567b268f85e0a129e8953e54243d197e" Jan 27 09:49:54 crc kubenswrapper[4799]: I0127 09:49:54.804143 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:49:54 crc kubenswrapper[4799]: E0127 09:49:54.804403 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:50:08 crc kubenswrapper[4799]: I0127 09:50:08.452040 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:50:08 crc kubenswrapper[4799]: E0127 09:50:08.452884 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:50:22 crc kubenswrapper[4799]: I0127 09:50:22.451958 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:50:22 crc kubenswrapper[4799]: E0127 09:50:22.452970 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:50:33 crc kubenswrapper[4799]: I0127 09:50:33.452540 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:50:33 crc kubenswrapper[4799]: E0127 09:50:33.453354 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.533916 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bcfsq"] Jan 27 09:50:35 crc kubenswrapper[4799]: E0127 09:50:35.534639 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ffd01d-e2eb-453c-942d-d69890717531" containerName="extract-content" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.534651 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ffd01d-e2eb-453c-942d-d69890717531" 
containerName="extract-content" Jan 27 09:50:35 crc kubenswrapper[4799]: E0127 09:50:35.534669 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ffd01d-e2eb-453c-942d-d69890717531" containerName="extract-utilities" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.534677 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ffd01d-e2eb-453c-942d-d69890717531" containerName="extract-utilities" Jan 27 09:50:35 crc kubenswrapper[4799]: E0127 09:50:35.534703 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ffd01d-e2eb-453c-942d-d69890717531" containerName="registry-server" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.534709 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ffd01d-e2eb-453c-942d-d69890717531" containerName="registry-server" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.534881 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ffd01d-e2eb-453c-942d-d69890717531" containerName="registry-server" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.536360 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.612539 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcfsq"] Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.621855 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-catalog-content\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.622025 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-utilities\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.622051 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbzq\" (UniqueName: \"kubernetes.io/projected/e66b41d3-9596-463e-a666-95a49e189f34-kube-api-access-6bbzq\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.724364 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-utilities\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.724638 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6bbzq\" (UniqueName: \"kubernetes.io/projected/e66b41d3-9596-463e-a666-95a49e189f34-kube-api-access-6bbzq\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.724673 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-catalog-content\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.725123 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-utilities\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.725159 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-catalog-content\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.747487 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bbzq\" (UniqueName: \"kubernetes.io/projected/e66b41d3-9596-463e-a666-95a49e189f34-kube-api-access-6bbzq\") pod \"redhat-operators-bcfsq\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:35 crc kubenswrapper[4799]: I0127 09:50:35.882074 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:36 crc kubenswrapper[4799]: I0127 09:50:36.358476 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcfsq"] Jan 27 09:50:36 crc kubenswrapper[4799]: W0127 09:50:36.362538 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode66b41d3_9596_463e_a666_95a49e189f34.slice/crio-8dba49612aedb4a9eacaf47c0059c7f9c3f000ad5496108c30ebbd587be68f0e WatchSource:0}: Error finding container 8dba49612aedb4a9eacaf47c0059c7f9c3f000ad5496108c30ebbd587be68f0e: Status 404 returned error can't find the container with id 8dba49612aedb4a9eacaf47c0059c7f9c3f000ad5496108c30ebbd587be68f0e Jan 27 09:50:37 crc kubenswrapper[4799]: I0127 09:50:37.222221 4799 generic.go:334] "Generic (PLEG): container finished" podID="e66b41d3-9596-463e-a666-95a49e189f34" containerID="8c02ed9ce88f0e78af8933eb0bcb70c56dd8e5d3ee0a6afd38e73f1af556f551" exitCode=0 Jan 27 09:50:37 crc kubenswrapper[4799]: I0127 09:50:37.222433 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcfsq" event={"ID":"e66b41d3-9596-463e-a666-95a49e189f34","Type":"ContainerDied","Data":"8c02ed9ce88f0e78af8933eb0bcb70c56dd8e5d3ee0a6afd38e73f1af556f551"} Jan 27 09:50:37 crc kubenswrapper[4799]: I0127 09:50:37.222628 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcfsq" event={"ID":"e66b41d3-9596-463e-a666-95a49e189f34","Type":"ContainerStarted","Data":"8dba49612aedb4a9eacaf47c0059c7f9c3f000ad5496108c30ebbd587be68f0e"} Jan 27 09:50:37 crc kubenswrapper[4799]: I0127 09:50:37.224938 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:50:39 crc kubenswrapper[4799]: I0127 09:50:39.242979 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bcfsq" event={"ID":"e66b41d3-9596-463e-a666-95a49e189f34","Type":"ContainerStarted","Data":"95129e0be673a7925ddf261d70a2ee1c0f105718b1e258557aa7752241f035c9"} Jan 27 09:50:40 crc kubenswrapper[4799]: I0127 09:50:40.256355 4799 generic.go:334] "Generic (PLEG): container finished" podID="e66b41d3-9596-463e-a666-95a49e189f34" containerID="95129e0be673a7925ddf261d70a2ee1c0f105718b1e258557aa7752241f035c9" exitCode=0 Jan 27 09:50:40 crc kubenswrapper[4799]: I0127 09:50:40.256580 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcfsq" event={"ID":"e66b41d3-9596-463e-a666-95a49e189f34","Type":"ContainerDied","Data":"95129e0be673a7925ddf261d70a2ee1c0f105718b1e258557aa7752241f035c9"} Jan 27 09:50:42 crc kubenswrapper[4799]: I0127 09:50:42.273547 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcfsq" event={"ID":"e66b41d3-9596-463e-a666-95a49e189f34","Type":"ContainerStarted","Data":"c4e9b1d1199764dacab635770e14a8bfea9cb641782872c47fa5d938660f3a13"} Jan 27 09:50:42 crc kubenswrapper[4799]: I0127 09:50:42.305202 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bcfsq" podStartSLOduration=3.419796269 podStartE2EDuration="7.305183536s" podCreationTimestamp="2026-01-27 09:50:35 +0000 UTC" firstStartedPulling="2026-01-27 09:50:37.224653589 +0000 UTC m=+7503.535757654" lastFinishedPulling="2026-01-27 09:50:41.110040846 +0000 UTC m=+7507.421144921" observedRunningTime="2026-01-27 09:50:42.29559022 +0000 UTC m=+7508.606694325" watchObservedRunningTime="2026-01-27 09:50:42.305183536 +0000 UTC m=+7508.616287591" Jan 27 09:50:44 crc kubenswrapper[4799]: I0127 09:50:44.459217 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:50:44 crc kubenswrapper[4799]: E0127 09:50:44.459763 4799 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:50:45 crc kubenswrapper[4799]: I0127 09:50:45.884260 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:45 crc kubenswrapper[4799]: I0127 09:50:45.884693 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:46 crc kubenswrapper[4799]: I0127 09:50:46.934624 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bcfsq" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="registry-server" probeResult="failure" output=< Jan 27 09:50:46 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 09:50:46 crc kubenswrapper[4799]: > Jan 27 09:50:55 crc kubenswrapper[4799]: I0127 09:50:55.452262 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:50:55 crc kubenswrapper[4799]: E0127 09:50:55.452993 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:50:55 crc kubenswrapper[4799]: I0127 09:50:55.960614 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:56 crc kubenswrapper[4799]: I0127 09:50:56.045200 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:56 crc kubenswrapper[4799]: I0127 09:50:56.204286 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcfsq"] Jan 27 09:50:57 crc kubenswrapper[4799]: I0127 09:50:57.425636 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bcfsq" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="registry-server" containerID="cri-o://c4e9b1d1199764dacab635770e14a8bfea9cb641782872c47fa5d938660f3a13" gracePeriod=2 Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.436825 4799 generic.go:334] "Generic (PLEG): container finished" podID="e66b41d3-9596-463e-a666-95a49e189f34" containerID="c4e9b1d1199764dacab635770e14a8bfea9cb641782872c47fa5d938660f3a13" exitCode=0 Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.436900 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcfsq" event={"ID":"e66b41d3-9596-463e-a666-95a49e189f34","Type":"ContainerDied","Data":"c4e9b1d1199764dacab635770e14a8bfea9cb641782872c47fa5d938660f3a13"} Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.913324 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.988154 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-utilities\") pod \"e66b41d3-9596-463e-a666-95a49e189f34\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.988675 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bbzq\" (UniqueName: \"kubernetes.io/projected/e66b41d3-9596-463e-a666-95a49e189f34-kube-api-access-6bbzq\") pod \"e66b41d3-9596-463e-a666-95a49e189f34\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.988770 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-catalog-content\") pod \"e66b41d3-9596-463e-a666-95a49e189f34\" (UID: \"e66b41d3-9596-463e-a666-95a49e189f34\") " Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.989028 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-utilities" (OuterVolumeSpecName: "utilities") pod "e66b41d3-9596-463e-a666-95a49e189f34" (UID: "e66b41d3-9596-463e-a666-95a49e189f34"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.989433 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:50:58 crc kubenswrapper[4799]: I0127 09:50:58.996060 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66b41d3-9596-463e-a666-95a49e189f34-kube-api-access-6bbzq" (OuterVolumeSpecName: "kube-api-access-6bbzq") pod "e66b41d3-9596-463e-a666-95a49e189f34" (UID: "e66b41d3-9596-463e-a666-95a49e189f34"). InnerVolumeSpecName "kube-api-access-6bbzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.094875 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bbzq\" (UniqueName: \"kubernetes.io/projected/e66b41d3-9596-463e-a666-95a49e189f34-kube-api-access-6bbzq\") on node \"crc\" DevicePath \"\"" Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.108218 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e66b41d3-9596-463e-a666-95a49e189f34" (UID: "e66b41d3-9596-463e-a666-95a49e189f34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.197271 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66b41d3-9596-463e-a666-95a49e189f34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.445856 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcfsq" event={"ID":"e66b41d3-9596-463e-a666-95a49e189f34","Type":"ContainerDied","Data":"8dba49612aedb4a9eacaf47c0059c7f9c3f000ad5496108c30ebbd587be68f0e"} Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.445903 4799 scope.go:117] "RemoveContainer" containerID="c4e9b1d1199764dacab635770e14a8bfea9cb641782872c47fa5d938660f3a13" Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.446042 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcfsq" Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.469880 4799 scope.go:117] "RemoveContainer" containerID="95129e0be673a7925ddf261d70a2ee1c0f105718b1e258557aa7752241f035c9" Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.495427 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcfsq"] Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.508957 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bcfsq"] Jan 27 09:50:59 crc kubenswrapper[4799]: I0127 09:50:59.520131 4799 scope.go:117] "RemoveContainer" containerID="8c02ed9ce88f0e78af8933eb0bcb70c56dd8e5d3ee0a6afd38e73f1af556f551" Jan 27 09:51:00 crc kubenswrapper[4799]: I0127 09:51:00.462668 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66b41d3-9596-463e-a666-95a49e189f34" path="/var/lib/kubelet/pods/e66b41d3-9596-463e-a666-95a49e189f34/volumes" Jan 27 09:51:07 crc 
kubenswrapper[4799]: I0127 09:51:07.451362 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:51:07 crc kubenswrapper[4799]: E0127 09:51:07.452143 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:51:19 crc kubenswrapper[4799]: I0127 09:51:19.461248 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:51:19 crc kubenswrapper[4799]: E0127 09:51:19.462474 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:51:33 crc kubenswrapper[4799]: I0127 09:51:33.452513 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:51:33 crc kubenswrapper[4799]: E0127 09:51:33.456064 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 
27 09:51:44 crc kubenswrapper[4799]: I0127 09:51:44.459585 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:51:44 crc kubenswrapper[4799]: E0127 09:51:44.460613 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:51:57 crc kubenswrapper[4799]: I0127 09:51:57.451766 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:51:57 crc kubenswrapper[4799]: E0127 09:51:57.452998 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:52:12 crc kubenswrapper[4799]: I0127 09:52:12.452123 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:52:12 crc kubenswrapper[4799]: E0127 09:52:12.453285 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:52:26 crc kubenswrapper[4799]: I0127 09:52:26.451936 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:52:26 crc kubenswrapper[4799]: E0127 09:52:26.453121 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:52:40 crc kubenswrapper[4799]: I0127 09:52:40.452433 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:52:40 crc kubenswrapper[4799]: E0127 09:52:40.453720 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:52:54 crc kubenswrapper[4799]: I0127 09:52:54.452521 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:52:54 crc kubenswrapper[4799]: E0127 09:52:54.453916 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:53:09 crc kubenswrapper[4799]: I0127 09:53:09.450965 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:53:09 crc kubenswrapper[4799]: E0127 09:53:09.451811 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:53:20 crc kubenswrapper[4799]: I0127 09:53:20.452917 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:53:20 crc kubenswrapper[4799]: E0127 09:53:20.453683 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:53:35 crc kubenswrapper[4799]: I0127 09:53:35.452027 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:53:35 crc kubenswrapper[4799]: E0127 09:53:35.453365 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:53:46 crc kubenswrapper[4799]: I0127 09:53:46.452210 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:53:46 crc kubenswrapper[4799]: E0127 09:53:46.463568 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:54:00 crc kubenswrapper[4799]: I0127 09:54:00.452508 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:54:00 crc kubenswrapper[4799]: E0127 09:54:00.454038 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:54:13 crc kubenswrapper[4799]: I0127 09:54:13.451860 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:54:13 crc kubenswrapper[4799]: E0127 09:54:13.452755 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:54:27 crc kubenswrapper[4799]: I0127 09:54:27.452113 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:54:27 crc kubenswrapper[4799]: E0127 09:54:27.453028 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:54:42 crc kubenswrapper[4799]: I0127 09:54:42.451230 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:54:42 crc kubenswrapper[4799]: E0127 09:54:42.452016 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 09:54:57 crc kubenswrapper[4799]: I0127 09:54:57.452728 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 09:54:58 crc kubenswrapper[4799]: I0127 09:54:58.208935 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"f88c137ef2024972774ec7c8064c7864efdbbe6c60771858757cf36afeb37ddf"} Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.129876 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rcbv8"] Jan 27 09:55:18 crc kubenswrapper[4799]: E0127 09:55:18.130763 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="extract-content" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.130777 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="extract-content" Jan 27 09:55:18 crc kubenswrapper[4799]: E0127 09:55:18.130805 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="extract-utilities" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.130813 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="extract-utilities" Jan 27 09:55:18 crc kubenswrapper[4799]: E0127 09:55:18.130829 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="registry-server" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.130837 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="registry-server" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.131045 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66b41d3-9596-463e-a666-95a49e189f34" containerName="registry-server" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.132397 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.145611 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcbv8"] Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.230555 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-catalog-content\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.230636 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ftxl\" (UniqueName: \"kubernetes.io/projected/94b25278-7417-40b6-beea-2640e1fadd55-kube-api-access-5ftxl\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.230710 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-utilities\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.332083 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-catalog-content\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.332450 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5ftxl\" (UniqueName: \"kubernetes.io/projected/94b25278-7417-40b6-beea-2640e1fadd55-kube-api-access-5ftxl\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.332496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-utilities\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.333056 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-utilities\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.333420 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-catalog-content\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.361363 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ftxl\" (UniqueName: \"kubernetes.io/projected/94b25278-7417-40b6-beea-2640e1fadd55-kube-api-access-5ftxl\") pod \"community-operators-rcbv8\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:18 crc kubenswrapper[4799]: I0127 09:55:18.458691 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:19 crc kubenswrapper[4799]: I0127 09:55:19.114402 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcbv8"] Jan 27 09:55:19 crc kubenswrapper[4799]: I0127 09:55:19.451798 4799 generic.go:334] "Generic (PLEG): container finished" podID="94b25278-7417-40b6-beea-2640e1fadd55" containerID="16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0" exitCode=0 Jan 27 09:55:19 crc kubenswrapper[4799]: I0127 09:55:19.451862 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbv8" event={"ID":"94b25278-7417-40b6-beea-2640e1fadd55","Type":"ContainerDied","Data":"16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0"} Jan 27 09:55:19 crc kubenswrapper[4799]: I0127 09:55:19.452275 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbv8" event={"ID":"94b25278-7417-40b6-beea-2640e1fadd55","Type":"ContainerStarted","Data":"7c464dd34960b082cd96ea7d8e2c48438ab83a4f61fdd212389af2c563a87438"} Jan 27 09:55:23 crc kubenswrapper[4799]: I0127 09:55:23.493217 4799 generic.go:334] "Generic (PLEG): container finished" podID="94b25278-7417-40b6-beea-2640e1fadd55" containerID="4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6" exitCode=0 Jan 27 09:55:23 crc kubenswrapper[4799]: I0127 09:55:23.493326 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbv8" event={"ID":"94b25278-7417-40b6-beea-2640e1fadd55","Type":"ContainerDied","Data":"4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6"} Jan 27 09:55:24 crc kubenswrapper[4799]: I0127 09:55:24.515385 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbv8" 
event={"ID":"94b25278-7417-40b6-beea-2640e1fadd55","Type":"ContainerStarted","Data":"05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9"} Jan 27 09:55:24 crc kubenswrapper[4799]: I0127 09:55:24.556858 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rcbv8" podStartSLOduration=1.987756782 podStartE2EDuration="6.556833037s" podCreationTimestamp="2026-01-27 09:55:18 +0000 UTC" firstStartedPulling="2026-01-27 09:55:19.454216412 +0000 UTC m=+7785.765320517" lastFinishedPulling="2026-01-27 09:55:24.023292707 +0000 UTC m=+7790.334396772" observedRunningTime="2026-01-27 09:55:24.542804843 +0000 UTC m=+7790.853908948" watchObservedRunningTime="2026-01-27 09:55:24.556833037 +0000 UTC m=+7790.867937102" Jan 27 09:55:28 crc kubenswrapper[4799]: I0127 09:55:28.473561 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:28 crc kubenswrapper[4799]: I0127 09:55:28.474395 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:28 crc kubenswrapper[4799]: I0127 09:55:28.539327 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:38 crc kubenswrapper[4799]: I0127 09:55:38.519050 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 09:55:38 crc kubenswrapper[4799]: I0127 09:55:38.613791 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcbv8"] Jan 27 09:55:38 crc kubenswrapper[4799]: I0127 09:55:38.658828 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tjvj"] Jan 27 09:55:38 crc kubenswrapper[4799]: I0127 09:55:38.660394 4799 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-6tjvj" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="registry-server" containerID="cri-o://32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053" gracePeriod=2 Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.170537 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjvj" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.217541 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-utilities\") pod \"6646c613-e0d7-42d3-b170-c2768b718f02\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.217666 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-catalog-content\") pod \"6646c613-e0d7-42d3-b170-c2768b718f02\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.217733 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl7jn\" (UniqueName: \"kubernetes.io/projected/6646c613-e0d7-42d3-b170-c2768b718f02-kube-api-access-rl7jn\") pod \"6646c613-e0d7-42d3-b170-c2768b718f02\" (UID: \"6646c613-e0d7-42d3-b170-c2768b718f02\") " Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.221644 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-utilities" (OuterVolumeSpecName: "utilities") pod "6646c613-e0d7-42d3-b170-c2768b718f02" (UID: "6646c613-e0d7-42d3-b170-c2768b718f02"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.227450 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6646c613-e0d7-42d3-b170-c2768b718f02-kube-api-access-rl7jn" (OuterVolumeSpecName: "kube-api-access-rl7jn") pod "6646c613-e0d7-42d3-b170-c2768b718f02" (UID: "6646c613-e0d7-42d3-b170-c2768b718f02"). InnerVolumeSpecName "kube-api-access-rl7jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.304931 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6646c613-e0d7-42d3-b170-c2768b718f02" (UID: "6646c613-e0d7-42d3-b170-c2768b718f02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.320107 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.320140 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6646c613-e0d7-42d3-b170-c2768b718f02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.320151 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl7jn\" (UniqueName: \"kubernetes.io/projected/6646c613-e0d7-42d3-b170-c2768b718f02-kube-api-access-rl7jn\") on node \"crc\" DevicePath \"\"" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.702756 4799 generic.go:334] "Generic (PLEG): container finished" podID="6646c613-e0d7-42d3-b170-c2768b718f02" 
containerID="32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053" exitCode=0 Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.702795 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjvj" event={"ID":"6646c613-e0d7-42d3-b170-c2768b718f02","Type":"ContainerDied","Data":"32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053"} Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.702822 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjvj" event={"ID":"6646c613-e0d7-42d3-b170-c2768b718f02","Type":"ContainerDied","Data":"b66036f2b95b7bdb26e89f68c695e6ba7a48bdca3b53ffddea8ec9b5e8990c83"} Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.702826 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjvj" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.702842 4799 scope.go:117] "RemoveContainer" containerID="32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.742090 4799 scope.go:117] "RemoveContainer" containerID="6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.760357 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tjvj"] Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.779077 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6tjvj"] Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.820148 4799 scope.go:117] "RemoveContainer" containerID="3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.854812 4799 scope.go:117] "RemoveContainer" containerID="32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053" Jan 27 
09:55:39 crc kubenswrapper[4799]: E0127 09:55:39.856198 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053\": container with ID starting with 32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053 not found: ID does not exist" containerID="32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.856242 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053"} err="failed to get container status \"32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053\": rpc error: code = NotFound desc = could not find container \"32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053\": container with ID starting with 32ad983dc9779fab1ca8f61998a36d0dac913ba985d8fc3200aabae250aca053 not found: ID does not exist" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.856267 4799 scope.go:117] "RemoveContainer" containerID="6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112" Jan 27 09:55:39 crc kubenswrapper[4799]: E0127 09:55:39.859049 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112\": container with ID starting with 6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112 not found: ID does not exist" containerID="6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.859075 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112"} err="failed to get container status 
\"6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112\": rpc error: code = NotFound desc = could not find container \"6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112\": container with ID starting with 6d203025920e8db0b36826296be6d6e8f68d3dcc5739a5095058fc7165c2c112 not found: ID does not exist" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.859090 4799 scope.go:117] "RemoveContainer" containerID="3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f" Jan 27 09:55:39 crc kubenswrapper[4799]: E0127 09:55:39.865640 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f\": container with ID starting with 3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f not found: ID does not exist" containerID="3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f" Jan 27 09:55:39 crc kubenswrapper[4799]: I0127 09:55:39.865673 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f"} err="failed to get container status \"3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f\": rpc error: code = NotFound desc = could not find container \"3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f\": container with ID starting with 3be4dc3be2a5b289499b91a2438190b25f3350991f87792d1a5e953d98c6c60f not found: ID does not exist" Jan 27 09:55:40 crc kubenswrapper[4799]: I0127 09:55:40.463235 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" path="/var/lib/kubelet/pods/6646c613-e0d7-42d3-b170-c2768b718f02/volumes" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.676480 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kbnxx"] Jan 27 09:56:26 
crc kubenswrapper[4799]: E0127 09:56:26.677660 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="extract-utilities" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.677682 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="extract-utilities" Jan 27 09:56:26 crc kubenswrapper[4799]: E0127 09:56:26.677722 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="extract-content" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.677734 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="extract-content" Jan 27 09:56:26 crc kubenswrapper[4799]: E0127 09:56:26.677760 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="registry-server" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.677774 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="registry-server" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.678115 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6646c613-e0d7-42d3-b170-c2768b718f02" containerName="registry-server" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.680758 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.691988 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kbnxx"] Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.781814 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-utilities\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.781865 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jkk\" (UniqueName: \"kubernetes.io/projected/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-kube-api-access-n4jkk\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.782603 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-catalog-content\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.884944 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-catalog-content\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.885402 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-utilities\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.885430 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4jkk\" (UniqueName: \"kubernetes.io/projected/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-kube-api-access-n4jkk\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.885610 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-catalog-content\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.885875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-utilities\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:26 crc kubenswrapper[4799]: I0127 09:56:26.907687 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4jkk\" (UniqueName: \"kubernetes.io/projected/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-kube-api-access-n4jkk\") pod \"certified-operators-kbnxx\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:27 crc kubenswrapper[4799]: I0127 09:56:27.003459 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:27 crc kubenswrapper[4799]: I0127 09:56:27.494841 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kbnxx"] Jan 27 09:56:28 crc kubenswrapper[4799]: I0127 09:56:28.193473 4799 generic.go:334] "Generic (PLEG): container finished" podID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerID="c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3" exitCode=0 Jan 27 09:56:28 crc kubenswrapper[4799]: I0127 09:56:28.193537 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbnxx" event={"ID":"37a8992e-e9f8-491e-84ba-f2c330d2ab3d","Type":"ContainerDied","Data":"c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3"} Jan 27 09:56:28 crc kubenswrapper[4799]: I0127 09:56:28.193823 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbnxx" event={"ID":"37a8992e-e9f8-491e-84ba-f2c330d2ab3d","Type":"ContainerStarted","Data":"bd1f36fd5ba4a50a1274b325f5b6b701e90551c9eb20759cfda4ff652f971a61"} Jan 27 09:56:28 crc kubenswrapper[4799]: I0127 09:56:28.196664 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:56:30 crc kubenswrapper[4799]: I0127 09:56:30.221075 4799 generic.go:334] "Generic (PLEG): container finished" podID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerID="308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e" exitCode=0 Jan 27 09:56:30 crc kubenswrapper[4799]: I0127 09:56:30.221207 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbnxx" event={"ID":"37a8992e-e9f8-491e-84ba-f2c330d2ab3d","Type":"ContainerDied","Data":"308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e"} Jan 27 09:56:31 crc kubenswrapper[4799]: I0127 09:56:31.231160 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-kbnxx" event={"ID":"37a8992e-e9f8-491e-84ba-f2c330d2ab3d","Type":"ContainerStarted","Data":"3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b"} Jan 27 09:56:31 crc kubenswrapper[4799]: I0127 09:56:31.252635 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kbnxx" podStartSLOduration=2.569464155 podStartE2EDuration="5.252614823s" podCreationTimestamp="2026-01-27 09:56:26 +0000 UTC" firstStartedPulling="2026-01-27 09:56:28.196081101 +0000 UTC m=+7854.507185196" lastFinishedPulling="2026-01-27 09:56:30.879231799 +0000 UTC m=+7857.190335864" observedRunningTime="2026-01-27 09:56:31.25170741 +0000 UTC m=+7857.562811485" watchObservedRunningTime="2026-01-27 09:56:31.252614823 +0000 UTC m=+7857.563718888" Jan 27 09:56:37 crc kubenswrapper[4799]: I0127 09:56:37.004839 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:37 crc kubenswrapper[4799]: I0127 09:56:37.005418 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:37 crc kubenswrapper[4799]: I0127 09:56:37.069494 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:37 crc kubenswrapper[4799]: I0127 09:56:37.348713 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:37 crc kubenswrapper[4799]: I0127 09:56:37.400365 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kbnxx"] Jan 27 09:56:39 crc kubenswrapper[4799]: I0127 09:56:39.318065 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kbnxx" 
podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="registry-server" containerID="cri-o://3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b" gracePeriod=2 Jan 27 09:56:39 crc kubenswrapper[4799]: I0127 09:56:39.802079 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:39 crc kubenswrapper[4799]: I0127 09:56:39.965341 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4jkk\" (UniqueName: \"kubernetes.io/projected/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-kube-api-access-n4jkk\") pod \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " Jan 27 09:56:39 crc kubenswrapper[4799]: I0127 09:56:39.965443 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-catalog-content\") pod \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " Jan 27 09:56:39 crc kubenswrapper[4799]: I0127 09:56:39.965590 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-utilities\") pod \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\" (UID: \"37a8992e-e9f8-491e-84ba-f2c330d2ab3d\") " Jan 27 09:56:39 crc kubenswrapper[4799]: I0127 09:56:39.966942 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-utilities" (OuterVolumeSpecName: "utilities") pod "37a8992e-e9f8-491e-84ba-f2c330d2ab3d" (UID: "37a8992e-e9f8-491e-84ba-f2c330d2ab3d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:56:39 crc kubenswrapper[4799]: I0127 09:56:39.974574 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-kube-api-access-n4jkk" (OuterVolumeSpecName: "kube-api-access-n4jkk") pod "37a8992e-e9f8-491e-84ba-f2c330d2ab3d" (UID: "37a8992e-e9f8-491e-84ba-f2c330d2ab3d"). InnerVolumeSpecName "kube-api-access-n4jkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.068483 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4jkk\" (UniqueName: \"kubernetes.io/projected/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-kube-api-access-n4jkk\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.068528 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.329516 4799 generic.go:334] "Generic (PLEG): container finished" podID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerID="3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b" exitCode=0 Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.329566 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbnxx" event={"ID":"37a8992e-e9f8-491e-84ba-f2c330d2ab3d","Type":"ContainerDied","Data":"3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b"} Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.329597 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbnxx" event={"ID":"37a8992e-e9f8-491e-84ba-f2c330d2ab3d","Type":"ContainerDied","Data":"bd1f36fd5ba4a50a1274b325f5b6b701e90551c9eb20759cfda4ff652f971a61"} Jan 27 09:56:40 crc kubenswrapper[4799]: 
I0127 09:56:40.329617 4799 scope.go:117] "RemoveContainer" containerID="3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.330080 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbnxx" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.354198 4799 scope.go:117] "RemoveContainer" containerID="308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.364187 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37a8992e-e9f8-491e-84ba-f2c330d2ab3d" (UID: "37a8992e-e9f8-491e-84ba-f2c330d2ab3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.374200 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8992e-e9f8-491e-84ba-f2c330d2ab3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.383798 4799 scope.go:117] "RemoveContainer" containerID="c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.419527 4799 scope.go:117] "RemoveContainer" containerID="3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b" Jan 27 09:56:40 crc kubenswrapper[4799]: E0127 09:56:40.421797 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b\": container with ID starting with 3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b not found: ID does not exist" 
containerID="3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.421874 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b"} err="failed to get container status \"3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b\": rpc error: code = NotFound desc = could not find container \"3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b\": container with ID starting with 3428de4d8bf480ca466f93ac5f7e928f7992a5958e988b9bbecf479e33859c4b not found: ID does not exist" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.421911 4799 scope.go:117] "RemoveContainer" containerID="308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e" Jan 27 09:56:40 crc kubenswrapper[4799]: E0127 09:56:40.422261 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e\": container with ID starting with 308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e not found: ID does not exist" containerID="308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.422295 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e"} err="failed to get container status \"308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e\": rpc error: code = NotFound desc = could not find container \"308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e\": container with ID starting with 308366a21aec939a9cb60a26dc880ab4c8880d18367f9e6aea22e2434faf219e not found: ID does not exist" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.422328 4799 scope.go:117] 
"RemoveContainer" containerID="c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3" Jan 27 09:56:40 crc kubenswrapper[4799]: E0127 09:56:40.422807 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3\": container with ID starting with c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3 not found: ID does not exist" containerID="c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.422832 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3"} err="failed to get container status \"c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3\": rpc error: code = NotFound desc = could not find container \"c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3\": container with ID starting with c56bd974891485cef5827db3a4aa90b3b1d06b30ffec9a63592c9e483fc0ecb3 not found: ID does not exist" Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.654342 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kbnxx"] Jan 27 09:56:40 crc kubenswrapper[4799]: I0127 09:56:40.664583 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kbnxx"] Jan 27 09:56:42 crc kubenswrapper[4799]: I0127 09:56:42.466617 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" path="/var/lib/kubelet/pods/37a8992e-e9f8-491e-84ba-f2c330d2ab3d/volumes" Jan 27 09:57:23 crc kubenswrapper[4799]: I0127 09:57:23.731098 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:57:23 crc kubenswrapper[4799]: I0127 09:57:23.731956 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.415495 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6x6pp"] Jan 27 09:57:47 crc kubenswrapper[4799]: E0127 09:57:47.417598 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="extract-content" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.417632 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="extract-content" Jan 27 09:57:47 crc kubenswrapper[4799]: E0127 09:57:47.417684 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="registry-server" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.417693 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="registry-server" Jan 27 09:57:47 crc kubenswrapper[4799]: E0127 09:57:47.417725 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="extract-utilities" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.417736 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="extract-utilities" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.418005 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="37a8992e-e9f8-491e-84ba-f2c330d2ab3d" containerName="registry-server" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.422421 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.451439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6x6pp"] Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.522522 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkg4b\" (UniqueName: \"kubernetes.io/projected/34db2abd-a7a2-4334-b97b-8647e183412e-kube-api-access-rkg4b\") pod \"redhat-marketplace-6x6pp\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.522931 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-utilities\") pod \"redhat-marketplace-6x6pp\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.523014 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-catalog-content\") pod \"redhat-marketplace-6x6pp\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.625151 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-utilities\") pod \"redhat-marketplace-6x6pp\" (UID: 
\"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.625227 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-catalog-content\") pod \"redhat-marketplace-6x6pp\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.625645 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkg4b\" (UniqueName: \"kubernetes.io/projected/34db2abd-a7a2-4334-b97b-8647e183412e-kube-api-access-rkg4b\") pod \"redhat-marketplace-6x6pp\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.625728 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-utilities\") pod \"redhat-marketplace-6x6pp\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.625818 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-catalog-content\") pod \"redhat-marketplace-6x6pp\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.648849 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkg4b\" (UniqueName: \"kubernetes.io/projected/34db2abd-a7a2-4334-b97b-8647e183412e-kube-api-access-rkg4b\") pod \"redhat-marketplace-6x6pp\" (UID: 
\"34db2abd-a7a2-4334-b97b-8647e183412e\") " pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:47 crc kubenswrapper[4799]: I0127 09:57:47.756887 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:48 crc kubenswrapper[4799]: I0127 09:57:48.296865 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6x6pp"] Jan 27 09:57:49 crc kubenswrapper[4799]: I0127 09:57:49.032050 4799 generic.go:334] "Generic (PLEG): container finished" podID="34db2abd-a7a2-4334-b97b-8647e183412e" containerID="f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f" exitCode=0 Jan 27 09:57:49 crc kubenswrapper[4799]: I0127 09:57:49.033527 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6x6pp" event={"ID":"34db2abd-a7a2-4334-b97b-8647e183412e","Type":"ContainerDied","Data":"f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f"} Jan 27 09:57:49 crc kubenswrapper[4799]: I0127 09:57:49.034427 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6x6pp" event={"ID":"34db2abd-a7a2-4334-b97b-8647e183412e","Type":"ContainerStarted","Data":"b0a6261a010bf0f1aa676237f4e2e82d23439c72c94fe5b1f34e8f4b30c4e457"} Jan 27 09:57:50 crc kubenswrapper[4799]: I0127 09:57:50.043274 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6x6pp" event={"ID":"34db2abd-a7a2-4334-b97b-8647e183412e","Type":"ContainerStarted","Data":"5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210"} Jan 27 09:57:51 crc kubenswrapper[4799]: I0127 09:57:51.057205 4799 generic.go:334] "Generic (PLEG): container finished" podID="34db2abd-a7a2-4334-b97b-8647e183412e" containerID="5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210" exitCode=0 Jan 27 09:57:51 crc kubenswrapper[4799]: I0127 
09:57:51.057269 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6x6pp" event={"ID":"34db2abd-a7a2-4334-b97b-8647e183412e","Type":"ContainerDied","Data":"5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210"} Jan 27 09:57:51 crc kubenswrapper[4799]: I0127 09:57:51.057326 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6x6pp" event={"ID":"34db2abd-a7a2-4334-b97b-8647e183412e","Type":"ContainerStarted","Data":"5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c"} Jan 27 09:57:51 crc kubenswrapper[4799]: I0127 09:57:51.085978 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6x6pp" podStartSLOduration=2.6364398700000002 podStartE2EDuration="4.085954273s" podCreationTimestamp="2026-01-27 09:57:47 +0000 UTC" firstStartedPulling="2026-01-27 09:57:49.035315821 +0000 UTC m=+7935.346419896" lastFinishedPulling="2026-01-27 09:57:50.484830204 +0000 UTC m=+7936.795934299" observedRunningTime="2026-01-27 09:57:51.07907202 +0000 UTC m=+7937.390176155" watchObservedRunningTime="2026-01-27 09:57:51.085954273 +0000 UTC m=+7937.397058378" Jan 27 09:57:53 crc kubenswrapper[4799]: I0127 09:57:53.731280 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:57:53 crc kubenswrapper[4799]: I0127 09:57:53.731978 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:57:57 
crc kubenswrapper[4799]: I0127 09:57:57.757771 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:57 crc kubenswrapper[4799]: I0127 09:57:57.759646 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:57 crc kubenswrapper[4799]: I0127 09:57:57.821169 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:58 crc kubenswrapper[4799]: I0127 09:57:58.182698 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:57:58 crc kubenswrapper[4799]: I0127 09:57:58.241743 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6x6pp"] Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.153672 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6x6pp" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="registry-server" containerID="cri-o://5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c" gracePeriod=2 Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.677886 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.688555 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-catalog-content\") pod \"34db2abd-a7a2-4334-b97b-8647e183412e\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.688630 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-utilities\") pod \"34db2abd-a7a2-4334-b97b-8647e183412e\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.688712 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkg4b\" (UniqueName: \"kubernetes.io/projected/34db2abd-a7a2-4334-b97b-8647e183412e-kube-api-access-rkg4b\") pod \"34db2abd-a7a2-4334-b97b-8647e183412e\" (UID: \"34db2abd-a7a2-4334-b97b-8647e183412e\") " Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.689704 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-utilities" (OuterVolumeSpecName: "utilities") pod "34db2abd-a7a2-4334-b97b-8647e183412e" (UID: "34db2abd-a7a2-4334-b97b-8647e183412e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.695619 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34db2abd-a7a2-4334-b97b-8647e183412e-kube-api-access-rkg4b" (OuterVolumeSpecName: "kube-api-access-rkg4b") pod "34db2abd-a7a2-4334-b97b-8647e183412e" (UID: "34db2abd-a7a2-4334-b97b-8647e183412e"). InnerVolumeSpecName "kube-api-access-rkg4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.723719 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34db2abd-a7a2-4334-b97b-8647e183412e" (UID: "34db2abd-a7a2-4334-b97b-8647e183412e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.790708 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkg4b\" (UniqueName: \"kubernetes.io/projected/34db2abd-a7a2-4334-b97b-8647e183412e-kube-api-access-rkg4b\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.790739 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:00 crc kubenswrapper[4799]: I0127 09:58:00.790751 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34db2abd-a7a2-4334-b97b-8647e183412e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.171876 4799 generic.go:334] "Generic (PLEG): container finished" podID="34db2abd-a7a2-4334-b97b-8647e183412e" containerID="5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c" exitCode=0 Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.171943 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6x6pp" event={"ID":"34db2abd-a7a2-4334-b97b-8647e183412e","Type":"ContainerDied","Data":"5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c"} Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.171973 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6x6pp" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.172003 4799 scope.go:117] "RemoveContainer" containerID="5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.171985 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6x6pp" event={"ID":"34db2abd-a7a2-4334-b97b-8647e183412e","Type":"ContainerDied","Data":"b0a6261a010bf0f1aa676237f4e2e82d23439c72c94fe5b1f34e8f4b30c4e457"} Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.204123 4799 scope.go:117] "RemoveContainer" containerID="5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.229981 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6x6pp"] Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.241071 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6x6pp"] Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.250049 4799 scope.go:117] "RemoveContainer" containerID="f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.322814 4799 scope.go:117] "RemoveContainer" containerID="5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c" Jan 27 09:58:01 crc kubenswrapper[4799]: E0127 09:58:01.323734 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c\": container with ID starting with 5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c not found: ID does not exist" containerID="5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.323814 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c"} err="failed to get container status \"5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c\": rpc error: code = NotFound desc = could not find container \"5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c\": container with ID starting with 5b8f73954e292eb6e5160ae8e00a94cc7d3ea6a304dc2e547464d7a9c095a50c not found: ID does not exist" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.323867 4799 scope.go:117] "RemoveContainer" containerID="5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210" Jan 27 09:58:01 crc kubenswrapper[4799]: E0127 09:58:01.324240 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210\": container with ID starting with 5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210 not found: ID does not exist" containerID="5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.324323 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210"} err="failed to get container status \"5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210\": rpc error: code = NotFound desc = could not find container \"5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210\": container with ID starting with 5ad6ad1c973d9c884c0d67e20b4f6f4d777474ba7517e23b3d3006715ccc9210 not found: ID does not exist" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.324364 4799 scope.go:117] "RemoveContainer" containerID="f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f" Jan 27 09:58:01 crc kubenswrapper[4799]: E0127 
09:58:01.324949 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f\": container with ID starting with f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f not found: ID does not exist" containerID="f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f" Jan 27 09:58:01 crc kubenswrapper[4799]: I0127 09:58:01.324978 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f"} err="failed to get container status \"f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f\": rpc error: code = NotFound desc = could not find container \"f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f\": container with ID starting with f0973504adee33d6dc73e9252277a37a9efe3a44cf2f24540000656bbaa4e07f not found: ID does not exist" Jan 27 09:58:02 crc kubenswrapper[4799]: I0127 09:58:02.469538 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" path="/var/lib/kubelet/pods/34db2abd-a7a2-4334-b97b-8647e183412e/volumes" Jan 27 09:58:23 crc kubenswrapper[4799]: I0127 09:58:23.731965 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:58:23 crc kubenswrapper[4799]: I0127 09:58:23.732830 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 09:58:23 crc kubenswrapper[4799]: I0127 09:58:23.732942 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 09:58:23 crc kubenswrapper[4799]: I0127 09:58:23.734146 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f88c137ef2024972774ec7c8064c7864efdbbe6c60771858757cf36afeb37ddf"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:58:23 crc kubenswrapper[4799]: I0127 09:58:23.734259 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://f88c137ef2024972774ec7c8064c7864efdbbe6c60771858757cf36afeb37ddf" gracePeriod=600 Jan 27 09:58:24 crc kubenswrapper[4799]: I0127 09:58:24.474787 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="f88c137ef2024972774ec7c8064c7864efdbbe6c60771858757cf36afeb37ddf" exitCode=0 Jan 27 09:58:24 crc kubenswrapper[4799]: I0127 09:58:24.474955 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"f88c137ef2024972774ec7c8064c7864efdbbe6c60771858757cf36afeb37ddf"} Jan 27 09:58:24 crc kubenswrapper[4799]: I0127 09:58:24.475382 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5"} Jan 27 09:58:24 crc 
kubenswrapper[4799]: I0127 09:58:24.475418 4799 scope.go:117] "RemoveContainer" containerID="4f53986707f61395535d100f0f281e1e9afc2165046bf91945562473bab88a02" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.208720 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp"] Jan 27 10:00:00 crc kubenswrapper[4799]: E0127 10:00:00.209954 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="extract-content" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.209978 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="extract-content" Jan 27 10:00:00 crc kubenswrapper[4799]: E0127 10:00:00.210001 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="registry-server" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.210012 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="registry-server" Jan 27 10:00:00 crc kubenswrapper[4799]: E0127 10:00:00.210058 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="extract-utilities" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.210073 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="extract-utilities" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.210422 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="34db2abd-a7a2-4334-b97b-8647e183412e" containerName="registry-server" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.211801 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.216038 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.221930 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp"] Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.226710 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.292136 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cc2\" (UniqueName: \"kubernetes.io/projected/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-kube-api-access-q5cc2\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.292679 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-secret-volume\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.292927 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-config-volume\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.395597 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-config-volume\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.396360 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5cc2\" (UniqueName: \"kubernetes.io/projected/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-kube-api-access-q5cc2\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.396564 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-secret-volume\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.399673 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-config-volume\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.409816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-secret-volume\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.431059 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5cc2\" (UniqueName: \"kubernetes.io/projected/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-kube-api-access-q5cc2\") pod \"collect-profiles-29491800-b4rsp\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:00 crc kubenswrapper[4799]: I0127 10:00:00.531088 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:01 crc kubenswrapper[4799]: I0127 10:00:01.103502 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp"] Jan 27 10:00:01 crc kubenswrapper[4799]: I0127 10:00:01.593794 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4a7f008-b447-4a94-b7fb-4d99a4fcfff8" containerID="7ddeeec6fccf6a3f37cdc744d251973bf801687e3c4652374f84a2a5f6ae945e" exitCode=0 Jan 27 10:00:01 crc kubenswrapper[4799]: I0127 10:00:01.593908 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" event={"ID":"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8","Type":"ContainerDied","Data":"7ddeeec6fccf6a3f37cdc744d251973bf801687e3c4652374f84a2a5f6ae945e"} Jan 27 10:00:01 crc kubenswrapper[4799]: I0127 10:00:01.594142 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" 
event={"ID":"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8","Type":"ContainerStarted","Data":"55d768d20fd008fee2abe5bc6a2b6a06dee16d4244d3fdf902ffab329822a465"} Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:02.968818 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.061203 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-secret-volume\") pod \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.061344 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5cc2\" (UniqueName: \"kubernetes.io/projected/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-kube-api-access-q5cc2\") pod \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.061444 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-config-volume\") pod \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\" (UID: \"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8\") " Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.062522 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-config-volume" (OuterVolumeSpecName: "config-volume") pod "f4a7f008-b447-4a94-b7fb-4d99a4fcfff8" (UID: "f4a7f008-b447-4a94-b7fb-4d99a4fcfff8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.068238 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-kube-api-access-q5cc2" (OuterVolumeSpecName: "kube-api-access-q5cc2") pod "f4a7f008-b447-4a94-b7fb-4d99a4fcfff8" (UID: "f4a7f008-b447-4a94-b7fb-4d99a4fcfff8"). InnerVolumeSpecName "kube-api-access-q5cc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.068417 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f4a7f008-b447-4a94-b7fb-4d99a4fcfff8" (UID: "f4a7f008-b447-4a94-b7fb-4d99a4fcfff8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.164090 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.164123 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5cc2\" (UniqueName: \"kubernetes.io/projected/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-kube-api-access-q5cc2\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.164134 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a7f008-b447-4a94-b7fb-4d99a4fcfff8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.618223 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" 
event={"ID":"f4a7f008-b447-4a94-b7fb-4d99a4fcfff8","Type":"ContainerDied","Data":"55d768d20fd008fee2abe5bc6a2b6a06dee16d4244d3fdf902ffab329822a465"} Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.618271 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d768d20fd008fee2abe5bc6a2b6a06dee16d4244d3fdf902ffab329822a465" Jan 27 10:00:03 crc kubenswrapper[4799]: I0127 10:00:03.618279 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-b4rsp" Jan 27 10:00:04 crc kubenswrapper[4799]: I0127 10:00:04.040913 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z"] Jan 27 10:00:04 crc kubenswrapper[4799]: I0127 10:00:04.048886 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491755-6vf7z"] Jan 27 10:00:04 crc kubenswrapper[4799]: I0127 10:00:04.471384 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24389976-4042-4f68-b694-1002ddb60da0" path="/var/lib/kubelet/pods/24389976-4042-4f68-b694-1002ddb60da0/volumes" Jan 27 10:00:44 crc kubenswrapper[4799]: I0127 10:00:44.556183 4799 scope.go:117] "RemoveContainer" containerID="bdb889e38d4d1600150ba6a84d5ab3d8e5360d1a2b7de8a4e7b82c944182dee3" Jan 27 10:00:53 crc kubenswrapper[4799]: I0127 10:00:53.731596 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:00:53 crc kubenswrapper[4799]: I0127 10:00:53.732210 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.186035 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29491801-86fb7"] Jan 27 10:01:00 crc kubenswrapper[4799]: E0127 10:01:00.187062 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a7f008-b447-4a94-b7fb-4d99a4fcfff8" containerName="collect-profiles" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.187079 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a7f008-b447-4a94-b7fb-4d99a4fcfff8" containerName="collect-profiles" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.187275 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a7f008-b447-4a94-b7fb-4d99a4fcfff8" containerName="collect-profiles" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.187953 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.226632 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29491801-86fb7"] Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.343703 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-combined-ca-bundle\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.343748 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-config-data\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.343851 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-fernet-keys\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.343867 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lpqf\" (UniqueName: \"kubernetes.io/projected/076186fc-1eef-4e54-bd41-7109370efb97-kube-api-access-6lpqf\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.446362 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6lpqf\" (UniqueName: \"kubernetes.io/projected/076186fc-1eef-4e54-bd41-7109370efb97-kube-api-access-6lpqf\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.446865 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-fernet-keys\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.448946 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-combined-ca-bundle\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.449137 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-config-data\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.453752 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-combined-ca-bundle\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.456935 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-fernet-keys\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.462991 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lpqf\" (UniqueName: \"kubernetes.io/projected/076186fc-1eef-4e54-bd41-7109370efb97-kube-api-access-6lpqf\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.464117 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-config-data\") pod \"keystone-cron-29491801-86fb7\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:00 crc kubenswrapper[4799]: I0127 10:01:00.522608 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:01 crc kubenswrapper[4799]: I0127 10:01:01.186256 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29491801-86fb7"] Jan 27 10:01:01 crc kubenswrapper[4799]: W0127 10:01:01.196264 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod076186fc_1eef_4e54_bd41_7109370efb97.slice/crio-6823593e2ed7b73680963aa11a381a47c3436dfa920fd412703a56f723accfc7 WatchSource:0}: Error finding container 6823593e2ed7b73680963aa11a381a47c3436dfa920fd412703a56f723accfc7: Status 404 returned error can't find the container with id 6823593e2ed7b73680963aa11a381a47c3436dfa920fd412703a56f723accfc7 Jan 27 10:01:01 crc kubenswrapper[4799]: I0127 10:01:01.212537 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491801-86fb7" event={"ID":"076186fc-1eef-4e54-bd41-7109370efb97","Type":"ContainerStarted","Data":"6823593e2ed7b73680963aa11a381a47c3436dfa920fd412703a56f723accfc7"} Jan 27 10:01:02 crc kubenswrapper[4799]: I0127 10:01:02.223802 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491801-86fb7" event={"ID":"076186fc-1eef-4e54-bd41-7109370efb97","Type":"ContainerStarted","Data":"d2a56713af88f869dbe3fcd1ec2e8993d7400b43b50987ba6e89d9ce460db631"} Jan 27 10:01:02 crc kubenswrapper[4799]: I0127 10:01:02.248145 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29491801-86fb7" podStartSLOduration=2.248123561 podStartE2EDuration="2.248123561s" podCreationTimestamp="2026-01-27 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:01:02.239281896 +0000 UTC m=+8128.550386011" watchObservedRunningTime="2026-01-27 10:01:02.248123561 +0000 UTC m=+8128.559227636" Jan 27 10:01:04 crc 
kubenswrapper[4799]: I0127 10:01:04.241880 4799 generic.go:334] "Generic (PLEG): container finished" podID="076186fc-1eef-4e54-bd41-7109370efb97" containerID="d2a56713af88f869dbe3fcd1ec2e8993d7400b43b50987ba6e89d9ce460db631" exitCode=0 Jan 27 10:01:04 crc kubenswrapper[4799]: I0127 10:01:04.242427 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491801-86fb7" event={"ID":"076186fc-1eef-4e54-bd41-7109370efb97","Type":"ContainerDied","Data":"d2a56713af88f869dbe3fcd1ec2e8993d7400b43b50987ba6e89d9ce460db631"} Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.632251 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.674853 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-combined-ca-bundle\") pod \"076186fc-1eef-4e54-bd41-7109370efb97\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.674923 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-config-data\") pod \"076186fc-1eef-4e54-bd41-7109370efb97\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.674952 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lpqf\" (UniqueName: \"kubernetes.io/projected/076186fc-1eef-4e54-bd41-7109370efb97-kube-api-access-6lpqf\") pod \"076186fc-1eef-4e54-bd41-7109370efb97\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.675057 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-fernet-keys\") pod \"076186fc-1eef-4e54-bd41-7109370efb97\" (UID: \"076186fc-1eef-4e54-bd41-7109370efb97\") " Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.688096 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "076186fc-1eef-4e54-bd41-7109370efb97" (UID: "076186fc-1eef-4e54-bd41-7109370efb97"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.688369 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076186fc-1eef-4e54-bd41-7109370efb97-kube-api-access-6lpqf" (OuterVolumeSpecName: "kube-api-access-6lpqf") pod "076186fc-1eef-4e54-bd41-7109370efb97" (UID: "076186fc-1eef-4e54-bd41-7109370efb97"). InnerVolumeSpecName "kube-api-access-6lpqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.701682 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "076186fc-1eef-4e54-bd41-7109370efb97" (UID: "076186fc-1eef-4e54-bd41-7109370efb97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.731511 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-config-data" (OuterVolumeSpecName: "config-data") pod "076186fc-1eef-4e54-bd41-7109370efb97" (UID: "076186fc-1eef-4e54-bd41-7109370efb97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.777625 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.777675 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.777693 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lpqf\" (UniqueName: \"kubernetes.io/projected/076186fc-1eef-4e54-bd41-7109370efb97-kube-api-access-6lpqf\") on node \"crc\" DevicePath \"\"" Jan 27 10:01:05 crc kubenswrapper[4799]: I0127 10:01:05.777708 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/076186fc-1eef-4e54-bd41-7109370efb97-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 10:01:06 crc kubenswrapper[4799]: I0127 10:01:06.311680 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29491801-86fb7" event={"ID":"076186fc-1eef-4e54-bd41-7109370efb97","Type":"ContainerDied","Data":"6823593e2ed7b73680963aa11a381a47c3436dfa920fd412703a56f723accfc7"} Jan 27 10:01:06 crc kubenswrapper[4799]: I0127 10:01:06.311764 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6823593e2ed7b73680963aa11a381a47c3436dfa920fd412703a56f723accfc7" Jan 27 10:01:06 crc kubenswrapper[4799]: I0127 10:01:06.311903 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29491801-86fb7" Jan 27 10:01:23 crc kubenswrapper[4799]: I0127 10:01:23.731865 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:01:23 crc kubenswrapper[4799]: I0127 10:01:23.733037 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.431968 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x2g5t"] Jan 27 10:01:42 crc kubenswrapper[4799]: E0127 10:01:42.433165 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076186fc-1eef-4e54-bd41-7109370efb97" containerName="keystone-cron" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.433188 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="076186fc-1eef-4e54-bd41-7109370efb97" containerName="keystone-cron" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.433818 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="076186fc-1eef-4e54-bd41-7109370efb97" containerName="keystone-cron" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.435551 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.482558 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2g5t"] Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.590625 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-utilities\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.590694 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-catalog-content\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.590804 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mk9d\" (UniqueName: \"kubernetes.io/projected/812cdc50-b45b-4b31-b7d6-b827d5875efb-kube-api-access-7mk9d\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.698543 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-utilities\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.698626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-catalog-content\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.698720 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mk9d\" (UniqueName: \"kubernetes.io/projected/812cdc50-b45b-4b31-b7d6-b827d5875efb-kube-api-access-7mk9d\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.699847 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-utilities\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.700179 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-catalog-content\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.726150 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mk9d\" (UniqueName: \"kubernetes.io/projected/812cdc50-b45b-4b31-b7d6-b827d5875efb-kube-api-access-7mk9d\") pod \"redhat-operators-x2g5t\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:42 crc kubenswrapper[4799]: I0127 10:01:42.776104 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:43 crc kubenswrapper[4799]: I0127 10:01:43.359407 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2g5t"] Jan 27 10:01:43 crc kubenswrapper[4799]: I0127 10:01:43.739826 4799 generic.go:334] "Generic (PLEG): container finished" podID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerID="298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e" exitCode=0 Jan 27 10:01:43 crc kubenswrapper[4799]: I0127 10:01:43.739878 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2g5t" event={"ID":"812cdc50-b45b-4b31-b7d6-b827d5875efb","Type":"ContainerDied","Data":"298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e"} Jan 27 10:01:43 crc kubenswrapper[4799]: I0127 10:01:43.740167 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2g5t" event={"ID":"812cdc50-b45b-4b31-b7d6-b827d5875efb","Type":"ContainerStarted","Data":"9ffa5127afa124e7f3e11363e9800b5564d7e284cfbb16eebe95da23aacb7f83"} Jan 27 10:01:43 crc kubenswrapper[4799]: I0127 10:01:43.742178 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:01:44 crc kubenswrapper[4799]: I0127 10:01:44.748570 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2g5t" event={"ID":"812cdc50-b45b-4b31-b7d6-b827d5875efb","Type":"ContainerStarted","Data":"d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891"} Jan 27 10:01:46 crc kubenswrapper[4799]: I0127 10:01:46.771601 4799 generic.go:334] "Generic (PLEG): container finished" podID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerID="d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891" exitCode=0 Jan 27 10:01:46 crc kubenswrapper[4799]: I0127 10:01:46.772231 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-x2g5t" event={"ID":"812cdc50-b45b-4b31-b7d6-b827d5875efb","Type":"ContainerDied","Data":"d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891"} Jan 27 10:01:48 crc kubenswrapper[4799]: I0127 10:01:48.794593 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2g5t" event={"ID":"812cdc50-b45b-4b31-b7d6-b827d5875efb","Type":"ContainerStarted","Data":"e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003"} Jan 27 10:01:48 crc kubenswrapper[4799]: I0127 10:01:48.822995 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x2g5t" podStartSLOduration=3.001816073 podStartE2EDuration="6.82297151s" podCreationTimestamp="2026-01-27 10:01:42 +0000 UTC" firstStartedPulling="2026-01-27 10:01:43.741924089 +0000 UTC m=+8170.053028154" lastFinishedPulling="2026-01-27 10:01:47.563079496 +0000 UTC m=+8173.874183591" observedRunningTime="2026-01-27 10:01:48.812266985 +0000 UTC m=+8175.123371060" watchObservedRunningTime="2026-01-27 10:01:48.82297151 +0000 UTC m=+8175.134075595" Jan 27 10:01:52 crc kubenswrapper[4799]: I0127 10:01:52.776657 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:52 crc kubenswrapper[4799]: I0127 10:01:52.777213 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:01:53 crc kubenswrapper[4799]: I0127 10:01:53.731813 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:01:53 crc kubenswrapper[4799]: I0127 10:01:53.732126 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:01:53 crc kubenswrapper[4799]: I0127 10:01:53.732183 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 10:01:53 crc kubenswrapper[4799]: I0127 10:01:53.732859 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:01:53 crc kubenswrapper[4799]: I0127 10:01:53.732949 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" gracePeriod=600 Jan 27 10:01:53 crc kubenswrapper[4799]: I0127 10:01:53.845990 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x2g5t" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="registry-server" probeResult="failure" output=< Jan 27 10:01:53 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Jan 27 10:01:53 crc kubenswrapper[4799]: > Jan 27 10:01:53 crc kubenswrapper[4799]: E0127 10:01:53.858778 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:01:54 crc kubenswrapper[4799]: I0127 10:01:54.853091 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" exitCode=0 Jan 27 10:01:54 crc kubenswrapper[4799]: I0127 10:01:54.853167 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5"} Jan 27 10:01:54 crc kubenswrapper[4799]: I0127 10:01:54.853549 4799 scope.go:117] "RemoveContainer" containerID="f88c137ef2024972774ec7c8064c7864efdbbe6c60771858757cf36afeb37ddf" Jan 27 10:01:54 crc kubenswrapper[4799]: I0127 10:01:54.854332 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:01:54 crc kubenswrapper[4799]: E0127 10:01:54.854703 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:02:02 crc kubenswrapper[4799]: I0127 10:02:02.843245 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:02:02 crc kubenswrapper[4799]: I0127 10:02:02.899218 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:02:03 crc kubenswrapper[4799]: I0127 10:02:03.089636 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2g5t"] Jan 27 10:02:03 crc kubenswrapper[4799]: I0127 10:02:03.935065 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x2g5t" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="registry-server" containerID="cri-o://e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003" gracePeriod=2 Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.426118 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.554680 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-catalog-content\") pod \"812cdc50-b45b-4b31-b7d6-b827d5875efb\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.554855 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-utilities\") pod \"812cdc50-b45b-4b31-b7d6-b827d5875efb\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.554882 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mk9d\" (UniqueName: \"kubernetes.io/projected/812cdc50-b45b-4b31-b7d6-b827d5875efb-kube-api-access-7mk9d\") pod \"812cdc50-b45b-4b31-b7d6-b827d5875efb\" (UID: \"812cdc50-b45b-4b31-b7d6-b827d5875efb\") " Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.555880 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-utilities" (OuterVolumeSpecName: "utilities") pod "812cdc50-b45b-4b31-b7d6-b827d5875efb" (UID: "812cdc50-b45b-4b31-b7d6-b827d5875efb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.561274 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812cdc50-b45b-4b31-b7d6-b827d5875efb-kube-api-access-7mk9d" (OuterVolumeSpecName: "kube-api-access-7mk9d") pod "812cdc50-b45b-4b31-b7d6-b827d5875efb" (UID: "812cdc50-b45b-4b31-b7d6-b827d5875efb"). InnerVolumeSpecName "kube-api-access-7mk9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.657628 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.657658 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mk9d\" (UniqueName: \"kubernetes.io/projected/812cdc50-b45b-4b31-b7d6-b827d5875efb-kube-api-access-7mk9d\") on node \"crc\" DevicePath \"\"" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.677624 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "812cdc50-b45b-4b31-b7d6-b827d5875efb" (UID: "812cdc50-b45b-4b31-b7d6-b827d5875efb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.759470 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812cdc50-b45b-4b31-b7d6-b827d5875efb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.949986 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2g5t" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.950016 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2g5t" event={"ID":"812cdc50-b45b-4b31-b7d6-b827d5875efb","Type":"ContainerDied","Data":"e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003"} Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.950086 4799 scope.go:117] "RemoveContainer" containerID="e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003" Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.949945 4799 generic.go:334] "Generic (PLEG): container finished" podID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerID="e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003" exitCode=0 Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.950321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2g5t" event={"ID":"812cdc50-b45b-4b31-b7d6-b827d5875efb","Type":"ContainerDied","Data":"9ffa5127afa124e7f3e11363e9800b5564d7e284cfbb16eebe95da23aacb7f83"} Jan 27 10:02:04 crc kubenswrapper[4799]: I0127 10:02:04.987678 4799 scope.go:117] "RemoveContainer" containerID="d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891" Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.016843 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2g5t"] Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 
10:02:05.024716 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x2g5t"] Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.044211 4799 scope.go:117] "RemoveContainer" containerID="298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e" Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.093690 4799 scope.go:117] "RemoveContainer" containerID="e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003" Jan 27 10:02:05 crc kubenswrapper[4799]: E0127 10:02:05.094247 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003\": container with ID starting with e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003 not found: ID does not exist" containerID="e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003" Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.094342 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003"} err="failed to get container status \"e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003\": rpc error: code = NotFound desc = could not find container \"e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003\": container with ID starting with e09e997a934f0d366e5b7cf42def7f49a3e544d020e27000aa8dd8aa87bff003 not found: ID does not exist" Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.094392 4799 scope.go:117] "RemoveContainer" containerID="d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891" Jan 27 10:02:05 crc kubenswrapper[4799]: E0127 10:02:05.094923 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891\": container with ID 
starting with d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891 not found: ID does not exist" containerID="d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891" Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.094967 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891"} err="failed to get container status \"d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891\": rpc error: code = NotFound desc = could not find container \"d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891\": container with ID starting with d85315907aec9757a0a78d66f2eec5ee77efaa223da6b273efa533bb40d2e891 not found: ID does not exist" Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.094994 4799 scope.go:117] "RemoveContainer" containerID="298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e" Jan 27 10:02:05 crc kubenswrapper[4799]: E0127 10:02:05.095362 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e\": container with ID starting with 298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e not found: ID does not exist" containerID="298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e" Jan 27 10:02:05 crc kubenswrapper[4799]: I0127 10:02:05.095405 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e"} err="failed to get container status \"298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e\": rpc error: code = NotFound desc = could not find container \"298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e\": container with ID starting with 298278a92dc0fd3664fae7c511a2e73ab671625cb127a62632b973075cf7df8e not found: 
ID does not exist" Jan 27 10:02:06 crc kubenswrapper[4799]: I0127 10:02:06.473384 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" path="/var/lib/kubelet/pods/812cdc50-b45b-4b31-b7d6-b827d5875efb/volumes" Jan 27 10:02:07 crc kubenswrapper[4799]: I0127 10:02:07.451834 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:02:07 crc kubenswrapper[4799]: E0127 10:02:07.452388 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:02:22 crc kubenswrapper[4799]: I0127 10:02:22.451850 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:02:22 crc kubenswrapper[4799]: E0127 10:02:22.453255 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:02:34 crc kubenswrapper[4799]: I0127 10:02:34.476674 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:02:34 crc kubenswrapper[4799]: E0127 10:02:34.477820 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:02:49 crc kubenswrapper[4799]: I0127 10:02:49.453199 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:02:49 crc kubenswrapper[4799]: E0127 10:02:49.455264 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:03:01 crc kubenswrapper[4799]: I0127 10:03:01.451565 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:03:01 crc kubenswrapper[4799]: E0127 10:03:01.454201 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:03:14 crc kubenswrapper[4799]: I0127 10:03:14.458055 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:03:14 crc kubenswrapper[4799]: E0127 10:03:14.459436 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:03:25 crc kubenswrapper[4799]: I0127 10:03:25.452513 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:03:25 crc kubenswrapper[4799]: E0127 10:03:25.453996 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:03:40 crc kubenswrapper[4799]: I0127 10:03:40.452502 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:03:40 crc kubenswrapper[4799]: E0127 10:03:40.453280 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:03:54 crc kubenswrapper[4799]: I0127 10:03:54.465676 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:03:54 crc kubenswrapper[4799]: E0127 10:03:54.466968 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:04:05 crc kubenswrapper[4799]: I0127 10:04:05.452506 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:04:05 crc kubenswrapper[4799]: E0127 10:04:05.453772 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:04:19 crc kubenswrapper[4799]: I0127 10:04:19.452011 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:04:19 crc kubenswrapper[4799]: E0127 10:04:19.453523 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:04:31 crc kubenswrapper[4799]: I0127 10:04:31.452744 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:04:31 crc kubenswrapper[4799]: E0127 10:04:31.453997 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:04:42 crc kubenswrapper[4799]: I0127 10:04:42.452709 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:04:42 crc kubenswrapper[4799]: E0127 10:04:42.453747 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:04:55 crc kubenswrapper[4799]: I0127 10:04:55.452262 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:04:55 crc kubenswrapper[4799]: E0127 10:04:55.453947 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:05:08 crc kubenswrapper[4799]: I0127 10:05:08.451864 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:05:08 crc kubenswrapper[4799]: E0127 10:05:08.452840 4799 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:05:23 crc kubenswrapper[4799]: I0127 10:05:23.451715 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:05:23 crc kubenswrapper[4799]: E0127 10:05:23.452585 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:05:35 crc kubenswrapper[4799]: I0127 10:05:35.452203 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:05:35 crc kubenswrapper[4799]: E0127 10:05:35.453020 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:05:48 crc kubenswrapper[4799]: I0127 10:05:48.451604 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:05:48 crc kubenswrapper[4799]: E0127 10:05:48.452571 4799 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:06:01 crc kubenswrapper[4799]: I0127 10:06:01.451935 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:06:01 crc kubenswrapper[4799]: E0127 10:06:01.453051 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:06:12 crc kubenswrapper[4799]: I0127 10:06:12.451449 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:06:12 crc kubenswrapper[4799]: E0127 10:06:12.452843 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:06:24 crc kubenswrapper[4799]: I0127 10:06:24.466195 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:06:24 crc kubenswrapper[4799]: E0127 10:06:24.467082 4799 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.169965 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jcc74"] Jan 27 10:06:28 crc kubenswrapper[4799]: E0127 10:06:28.170701 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="registry-server" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.170713 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="registry-server" Jan 27 10:06:28 crc kubenswrapper[4799]: E0127 10:06:28.170733 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="extract-utilities" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.170740 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="extract-utilities" Jan 27 10:06:28 crc kubenswrapper[4799]: E0127 10:06:28.170766 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="extract-content" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.170772 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="extract-content" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.170958 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="812cdc50-b45b-4b31-b7d6-b827d5875efb" containerName="registry-server" Jan 27 
10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.172965 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.193254 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcc74"] Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.236249 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-utilities\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.236386 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwzq\" (UniqueName: \"kubernetes.io/projected/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-kube-api-access-scwzq\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.236543 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-catalog-content\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.338064 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-utilities\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 
10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.338125 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scwzq\" (UniqueName: \"kubernetes.io/projected/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-kube-api-access-scwzq\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.338151 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-catalog-content\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.338701 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-utilities\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.338742 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-catalog-content\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc kubenswrapper[4799]: I0127 10:06:28.360548 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scwzq\" (UniqueName: \"kubernetes.io/projected/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-kube-api-access-scwzq\") pod \"certified-operators-jcc74\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:28 crc 
kubenswrapper[4799]: I0127 10:06:28.496517 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:29 crc kubenswrapper[4799]: I0127 10:06:29.083443 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcc74"] Jan 27 10:06:29 crc kubenswrapper[4799]: I0127 10:06:29.802449 4799 generic.go:334] "Generic (PLEG): container finished" podID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerID="533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068" exitCode=0 Jan 27 10:06:29 crc kubenswrapper[4799]: I0127 10:06:29.802636 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcc74" event={"ID":"e5af0ea9-8e1e-4477-b74b-f26f02e99af1","Type":"ContainerDied","Data":"533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068"} Jan 27 10:06:29 crc kubenswrapper[4799]: I0127 10:06:29.802758 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcc74" event={"ID":"e5af0ea9-8e1e-4477-b74b-f26f02e99af1","Type":"ContainerStarted","Data":"962ae5eacefd3b55f3a3acf9978b3ba80269d3e352d92138bc025fb73c9ccc03"} Jan 27 10:06:31 crc kubenswrapper[4799]: I0127 10:06:31.822029 4799 generic.go:334] "Generic (PLEG): container finished" podID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerID="3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62" exitCode=0 Jan 27 10:06:31 crc kubenswrapper[4799]: I0127 10:06:31.822116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcc74" event={"ID":"e5af0ea9-8e1e-4477-b74b-f26f02e99af1","Type":"ContainerDied","Data":"3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62"} Jan 27 10:06:32 crc kubenswrapper[4799]: I0127 10:06:32.834555 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcc74" 
event={"ID":"e5af0ea9-8e1e-4477-b74b-f26f02e99af1","Type":"ContainerStarted","Data":"2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7"} Jan 27 10:06:32 crc kubenswrapper[4799]: I0127 10:06:32.876888 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jcc74" podStartSLOduration=2.212148317 podStartE2EDuration="4.876865644s" podCreationTimestamp="2026-01-27 10:06:28 +0000 UTC" firstStartedPulling="2026-01-27 10:06:29.805154557 +0000 UTC m=+8456.116258622" lastFinishedPulling="2026-01-27 10:06:32.469871884 +0000 UTC m=+8458.780975949" observedRunningTime="2026-01-27 10:06:32.870037568 +0000 UTC m=+8459.181141663" watchObservedRunningTime="2026-01-27 10:06:32.876865644 +0000 UTC m=+8459.187969709" Jan 27 10:06:38 crc kubenswrapper[4799]: I0127 10:06:38.498580 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:38 crc kubenswrapper[4799]: I0127 10:06:38.499091 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:38 crc kubenswrapper[4799]: I0127 10:06:38.548092 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:38 crc kubenswrapper[4799]: I0127 10:06:38.994161 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:39 crc kubenswrapper[4799]: I0127 10:06:39.071217 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcc74"] Jan 27 10:06:39 crc kubenswrapper[4799]: I0127 10:06:39.452536 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:06:39 crc kubenswrapper[4799]: E0127 10:06:39.452860 4799 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:06:40 crc kubenswrapper[4799]: I0127 10:06:40.904736 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jcc74" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="registry-server" containerID="cri-o://2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7" gracePeriod=2 Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.389156 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.508374 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-catalog-content\") pod \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.508419 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-utilities\") pod \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\" (UID: \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.508505 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scwzq\" (UniqueName: \"kubernetes.io/projected/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-kube-api-access-scwzq\") pod \"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\" (UID: 
\"e5af0ea9-8e1e-4477-b74b-f26f02e99af1\") " Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.510362 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-utilities" (OuterVolumeSpecName: "utilities") pod "e5af0ea9-8e1e-4477-b74b-f26f02e99af1" (UID: "e5af0ea9-8e1e-4477-b74b-f26f02e99af1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.515844 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-kube-api-access-scwzq" (OuterVolumeSpecName: "kube-api-access-scwzq") pod "e5af0ea9-8e1e-4477-b74b-f26f02e99af1" (UID: "e5af0ea9-8e1e-4477-b74b-f26f02e99af1"). InnerVolumeSpecName "kube-api-access-scwzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.611777 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.611820 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scwzq\" (UniqueName: \"kubernetes.io/projected/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-kube-api-access-scwzq\") on node \"crc\" DevicePath \"\"" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.777522 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5af0ea9-8e1e-4477-b74b-f26f02e99af1" (UID: "e5af0ea9-8e1e-4477-b74b-f26f02e99af1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.815421 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5af0ea9-8e1e-4477-b74b-f26f02e99af1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.914751 4799 generic.go:334] "Generic (PLEG): container finished" podID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerID="2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7" exitCode=0 Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.914811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcc74" event={"ID":"e5af0ea9-8e1e-4477-b74b-f26f02e99af1","Type":"ContainerDied","Data":"2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7"} Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.914844 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcc74" event={"ID":"e5af0ea9-8e1e-4477-b74b-f26f02e99af1","Type":"ContainerDied","Data":"962ae5eacefd3b55f3a3acf9978b3ba80269d3e352d92138bc025fb73c9ccc03"} Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.914863 4799 scope.go:117] "RemoveContainer" containerID="2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.915083 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcc74" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.946420 4799 scope.go:117] "RemoveContainer" containerID="3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62" Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.953881 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcc74"] Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.962365 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jcc74"] Jan 27 10:06:41 crc kubenswrapper[4799]: I0127 10:06:41.968154 4799 scope.go:117] "RemoveContainer" containerID="533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068" Jan 27 10:06:42 crc kubenswrapper[4799]: I0127 10:06:42.020931 4799 scope.go:117] "RemoveContainer" containerID="2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7" Jan 27 10:06:42 crc kubenswrapper[4799]: E0127 10:06:42.021496 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7\": container with ID starting with 2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7 not found: ID does not exist" containerID="2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7" Jan 27 10:06:42 crc kubenswrapper[4799]: I0127 10:06:42.021731 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7"} err="failed to get container status \"2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7\": rpc error: code = NotFound desc = could not find container \"2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7\": container with ID starting with 2089b2f64966536bfb4e3889f275c7e32ea38f0a55fc178f1870ec5612acacd7 not 
found: ID does not exist" Jan 27 10:06:42 crc kubenswrapper[4799]: I0127 10:06:42.021760 4799 scope.go:117] "RemoveContainer" containerID="3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62" Jan 27 10:06:42 crc kubenswrapper[4799]: E0127 10:06:42.022150 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62\": container with ID starting with 3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62 not found: ID does not exist" containerID="3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62" Jan 27 10:06:42 crc kubenswrapper[4799]: I0127 10:06:42.022199 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62"} err="failed to get container status \"3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62\": rpc error: code = NotFound desc = could not find container \"3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62\": container with ID starting with 3309b43ab1330379450e1fb77ced87235cfed70a5a48f7fbd20730760a835a62 not found: ID does not exist" Jan 27 10:06:42 crc kubenswrapper[4799]: I0127 10:06:42.022233 4799 scope.go:117] "RemoveContainer" containerID="533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068" Jan 27 10:06:42 crc kubenswrapper[4799]: E0127 10:06:42.022590 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068\": container with ID starting with 533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068 not found: ID does not exist" containerID="533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068" Jan 27 10:06:42 crc kubenswrapper[4799]: I0127 10:06:42.022617 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068"} err="failed to get container status \"533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068\": rpc error: code = NotFound desc = could not find container \"533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068\": container with ID starting with 533289de0458022c095136dc70cba409d60780c3acd8cc5db844172f0c3fc068 not found: ID does not exist" Jan 27 10:06:42 crc kubenswrapper[4799]: I0127 10:06:42.469791 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" path="/var/lib/kubelet/pods/e5af0ea9-8e1e-4477-b74b-f26f02e99af1/volumes" Jan 27 10:06:53 crc kubenswrapper[4799]: I0127 10:06:53.451497 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:06:53 crc kubenswrapper[4799]: E0127 10:06:53.452378 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:07:04 crc kubenswrapper[4799]: I0127 10:07:04.457091 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:07:05 crc kubenswrapper[4799]: I0127 10:07:05.189463 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"6f57606b63046eee0f036a82ab3032ceaa1614d7b07921f9827b64e7d98f2d97"} Jan 27 10:09:23 crc 
kubenswrapper[4799]: I0127 10:09:23.731234 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:09:23 crc kubenswrapper[4799]: I0127 10:09:23.731948 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:09:53 crc kubenswrapper[4799]: I0127 10:09:53.731180 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:09:53 crc kubenswrapper[4799]: I0127 10:09:53.731818 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:10:23 crc kubenswrapper[4799]: I0127 10:10:23.730703 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:10:23 crc kubenswrapper[4799]: I0127 10:10:23.732147 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:10:23 crc kubenswrapper[4799]: I0127 10:10:23.732278 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 10:10:23 crc kubenswrapper[4799]: I0127 10:10:23.733044 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f57606b63046eee0f036a82ab3032ceaa1614d7b07921f9827b64e7d98f2d97"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:10:23 crc kubenswrapper[4799]: I0127 10:10:23.733168 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://6f57606b63046eee0f036a82ab3032ceaa1614d7b07921f9827b64e7d98f2d97" gracePeriod=600 Jan 27 10:10:24 crc kubenswrapper[4799]: I0127 10:10:24.236883 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="6f57606b63046eee0f036a82ab3032ceaa1614d7b07921f9827b64e7d98f2d97" exitCode=0 Jan 27 10:10:24 crc kubenswrapper[4799]: I0127 10:10:24.236963 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"6f57606b63046eee0f036a82ab3032ceaa1614d7b07921f9827b64e7d98f2d97"} Jan 27 10:10:24 crc kubenswrapper[4799]: I0127 10:10:24.237211 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e"} Jan 27 10:10:24 crc kubenswrapper[4799]: I0127 10:10:24.237236 4799 scope.go:117] "RemoveContainer" containerID="cf119859c950e94ea743defb978b2ac171424e9fb045f04a0c0795be582ea1e5" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.209283 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h9s9l"] Jan 27 10:10:29 crc kubenswrapper[4799]: E0127 10:10:29.210111 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="registry-server" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.210124 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="registry-server" Jan 27 10:10:29 crc kubenswrapper[4799]: E0127 10:10:29.210152 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="extract-content" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.210158 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="extract-content" Jan 27 10:10:29 crc kubenswrapper[4799]: E0127 10:10:29.210175 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="extract-utilities" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.210182 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="extract-utilities" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.210387 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5af0ea9-8e1e-4477-b74b-f26f02e99af1" containerName="registry-server" Jan 27 
10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.211593 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.242103 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9s9l"] Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.324986 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-catalog-content\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.325048 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f76c\" (UniqueName: \"kubernetes.io/projected/9deba786-413f-4c64-b0ed-d1e44002f9c0-kube-api-access-2f76c\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.325074 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-utilities\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.427243 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-catalog-content\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " 
pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.427309 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f76c\" (UniqueName: \"kubernetes.io/projected/9deba786-413f-4c64-b0ed-d1e44002f9c0-kube-api-access-2f76c\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.427358 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-utilities\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.427936 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-catalog-content\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.427936 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-utilities\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.471605 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f76c\" (UniqueName: \"kubernetes.io/projected/9deba786-413f-4c64-b0ed-d1e44002f9c0-kube-api-access-2f76c\") pod \"community-operators-h9s9l\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " 
pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:29 crc kubenswrapper[4799]: I0127 10:10:29.580104 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:30 crc kubenswrapper[4799]: I0127 10:10:30.129115 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9s9l"] Jan 27 10:10:30 crc kubenswrapper[4799]: W0127 10:10:30.144408 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9deba786_413f_4c64_b0ed_d1e44002f9c0.slice/crio-17a3d4f6b9b1627987d3ec764c46674967809020b299525475a78361af78fd68 WatchSource:0}: Error finding container 17a3d4f6b9b1627987d3ec764c46674967809020b299525475a78361af78fd68: Status 404 returned error can't find the container with id 17a3d4f6b9b1627987d3ec764c46674967809020b299525475a78361af78fd68 Jan 27 10:10:30 crc kubenswrapper[4799]: I0127 10:10:30.298854 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9s9l" event={"ID":"9deba786-413f-4c64-b0ed-d1e44002f9c0","Type":"ContainerStarted","Data":"17a3d4f6b9b1627987d3ec764c46674967809020b299525475a78361af78fd68"} Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.312387 4799 generic.go:334] "Generic (PLEG): container finished" podID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerID="48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf" exitCode=0 Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.312493 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9s9l" event={"ID":"9deba786-413f-4c64-b0ed-d1e44002f9c0","Type":"ContainerDied","Data":"48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf"} Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.317478 4799 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.857503 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-69rmq"] Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.860303 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.866684 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69rmq"] Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.983772 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-utilities\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.984049 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-catalog-content\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:31 crc kubenswrapper[4799]: I0127 10:10:31.984162 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhqt\" (UniqueName: \"kubernetes.io/projected/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-kube-api-access-7zhqt\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.086022 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-utilities\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.086390 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-catalog-content\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.086516 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zhqt\" (UniqueName: \"kubernetes.io/projected/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-kube-api-access-7zhqt\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.086815 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-utilities\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.087101 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-catalog-content\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.112884 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zhqt\" (UniqueName: 
\"kubernetes.io/projected/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-kube-api-access-7zhqt\") pod \"redhat-marketplace-69rmq\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.185140 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:32 crc kubenswrapper[4799]: I0127 10:10:32.732501 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69rmq"] Jan 27 10:10:32 crc kubenswrapper[4799]: W0127 10:10:32.775130 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ea68dab_3ab4_4cd1_8d5d_e0f4728d762f.slice/crio-23123b55ee4f2338f656a879088f958777644bd19bb2f8f4871e8650d7ccf3ec WatchSource:0}: Error finding container 23123b55ee4f2338f656a879088f958777644bd19bb2f8f4871e8650d7ccf3ec: Status 404 returned error can't find the container with id 23123b55ee4f2338f656a879088f958777644bd19bb2f8f4871e8650d7ccf3ec Jan 27 10:10:33 crc kubenswrapper[4799]: I0127 10:10:33.333545 4799 generic.go:334] "Generic (PLEG): container finished" podID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerID="c89ea0d1f7a40adf764da3be041f7622b97587f20429e0788d1cd08908aa9556" exitCode=0 Jan 27 10:10:33 crc kubenswrapper[4799]: I0127 10:10:33.333733 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69rmq" event={"ID":"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f","Type":"ContainerDied","Data":"c89ea0d1f7a40adf764da3be041f7622b97587f20429e0788d1cd08908aa9556"} Jan 27 10:10:33 crc kubenswrapper[4799]: I0127 10:10:33.333969 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69rmq" 
event={"ID":"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f","Type":"ContainerStarted","Data":"23123b55ee4f2338f656a879088f958777644bd19bb2f8f4871e8650d7ccf3ec"} Jan 27 10:10:33 crc kubenswrapper[4799]: I0127 10:10:33.338401 4799 generic.go:334] "Generic (PLEG): container finished" podID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerID="1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018" exitCode=0 Jan 27 10:10:33 crc kubenswrapper[4799]: I0127 10:10:33.338461 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9s9l" event={"ID":"9deba786-413f-4c64-b0ed-d1e44002f9c0","Type":"ContainerDied","Data":"1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018"} Jan 27 10:10:34 crc kubenswrapper[4799]: I0127 10:10:34.350784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69rmq" event={"ID":"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f","Type":"ContainerStarted","Data":"8572d5993a7ad182a7e4882ff92c5a5e49c930bca835f07cdf6d61455629ea7b"} Jan 27 10:10:34 crc kubenswrapper[4799]: I0127 10:10:34.353533 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9s9l" event={"ID":"9deba786-413f-4c64-b0ed-d1e44002f9c0","Type":"ContainerStarted","Data":"fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb"} Jan 27 10:10:34 crc kubenswrapper[4799]: I0127 10:10:34.390613 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h9s9l" podStartSLOduration=2.968933232 podStartE2EDuration="5.390590586s" podCreationTimestamp="2026-01-27 10:10:29 +0000 UTC" firstStartedPulling="2026-01-27 10:10:31.31700352 +0000 UTC m=+8697.628107595" lastFinishedPulling="2026-01-27 10:10:33.738660864 +0000 UTC m=+8700.049764949" observedRunningTime="2026-01-27 10:10:34.388990462 +0000 UTC m=+8700.700094547" watchObservedRunningTime="2026-01-27 10:10:34.390590586 +0000 UTC 
m=+8700.701694651" Jan 27 10:10:35 crc kubenswrapper[4799]: I0127 10:10:35.364622 4799 generic.go:334] "Generic (PLEG): container finished" podID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerID="8572d5993a7ad182a7e4882ff92c5a5e49c930bca835f07cdf6d61455629ea7b" exitCode=0 Jan 27 10:10:35 crc kubenswrapper[4799]: I0127 10:10:35.365429 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69rmq" event={"ID":"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f","Type":"ContainerDied","Data":"8572d5993a7ad182a7e4882ff92c5a5e49c930bca835f07cdf6d61455629ea7b"} Jan 27 10:10:36 crc kubenswrapper[4799]: I0127 10:10:36.378648 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69rmq" event={"ID":"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f","Type":"ContainerStarted","Data":"38636d87a948906ad10c271a0ee7eda862fd397f5e0e83f147e1f668334eef3e"} Jan 27 10:10:36 crc kubenswrapper[4799]: I0127 10:10:36.412870 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-69rmq" podStartSLOduration=2.757709133 podStartE2EDuration="5.412838149s" podCreationTimestamp="2026-01-27 10:10:31 +0000 UTC" firstStartedPulling="2026-01-27 10:10:33.337199337 +0000 UTC m=+8699.648303432" lastFinishedPulling="2026-01-27 10:10:35.992328383 +0000 UTC m=+8702.303432448" observedRunningTime="2026-01-27 10:10:36.396441243 +0000 UTC m=+8702.707545388" watchObservedRunningTime="2026-01-27 10:10:36.412838149 +0000 UTC m=+8702.723942284" Jan 27 10:10:39 crc kubenswrapper[4799]: I0127 10:10:39.580500 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:39 crc kubenswrapper[4799]: I0127 10:10:39.581106 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:39 crc kubenswrapper[4799]: I0127 10:10:39.638581 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:40 crc kubenswrapper[4799]: I0127 10:10:40.498241 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:40 crc kubenswrapper[4799]: I0127 10:10:40.806433 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9s9l"] Jan 27 10:10:42 crc kubenswrapper[4799]: I0127 10:10:42.185711 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:42 crc kubenswrapper[4799]: I0127 10:10:42.186149 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:42 crc kubenswrapper[4799]: I0127 10:10:42.241603 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:42 crc kubenswrapper[4799]: I0127 10:10:42.440813 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h9s9l" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="registry-server" containerID="cri-o://fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb" gracePeriod=2 Jan 27 10:10:42 crc kubenswrapper[4799]: I0127 10:10:42.544229 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:42 crc kubenswrapper[4799]: I0127 10:10:42.955788 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.069971 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-catalog-content\") pod \"9deba786-413f-4c64-b0ed-d1e44002f9c0\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.070086 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f76c\" (UniqueName: \"kubernetes.io/projected/9deba786-413f-4c64-b0ed-d1e44002f9c0-kube-api-access-2f76c\") pod \"9deba786-413f-4c64-b0ed-d1e44002f9c0\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.070125 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-utilities\") pod \"9deba786-413f-4c64-b0ed-d1e44002f9c0\" (UID: \"9deba786-413f-4c64-b0ed-d1e44002f9c0\") " Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.071742 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-utilities" (OuterVolumeSpecName: "utilities") pod "9deba786-413f-4c64-b0ed-d1e44002f9c0" (UID: "9deba786-413f-4c64-b0ed-d1e44002f9c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.076747 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9deba786-413f-4c64-b0ed-d1e44002f9c0-kube-api-access-2f76c" (OuterVolumeSpecName: "kube-api-access-2f76c") pod "9deba786-413f-4c64-b0ed-d1e44002f9c0" (UID: "9deba786-413f-4c64-b0ed-d1e44002f9c0"). InnerVolumeSpecName "kube-api-access-2f76c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.142812 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9deba786-413f-4c64-b0ed-d1e44002f9c0" (UID: "9deba786-413f-4c64-b0ed-d1e44002f9c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.172451 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.172488 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f76c\" (UniqueName: \"kubernetes.io/projected/9deba786-413f-4c64-b0ed-d1e44002f9c0-kube-api-access-2f76c\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.172504 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9deba786-413f-4c64-b0ed-d1e44002f9c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.404028 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69rmq"] Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.452152 4799 generic.go:334] "Generic (PLEG): container finished" podID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerID="fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb" exitCode=0 Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.452205 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9s9l" 
event={"ID":"9deba786-413f-4c64-b0ed-d1e44002f9c0","Type":"ContainerDied","Data":"fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb"} Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.452257 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9s9l" event={"ID":"9deba786-413f-4c64-b0ed-d1e44002f9c0","Type":"ContainerDied","Data":"17a3d4f6b9b1627987d3ec764c46674967809020b299525475a78361af78fd68"} Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.452288 4799 scope.go:117] "RemoveContainer" containerID="fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.452594 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9s9l" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.474721 4799 scope.go:117] "RemoveContainer" containerID="1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.502086 4799 scope.go:117] "RemoveContainer" containerID="48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.504965 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9s9l"] Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.512609 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h9s9l"] Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.561548 4799 scope.go:117] "RemoveContainer" containerID="fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb" Jan 27 10:10:43 crc kubenswrapper[4799]: E0127 10:10:43.562168 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb\": container 
with ID starting with fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb not found: ID does not exist" containerID="fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.562397 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb"} err="failed to get container status \"fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb\": rpc error: code = NotFound desc = could not find container \"fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb\": container with ID starting with fa5757a4edf312aa399511d0f7ee0d875663942c46eda21e7c84d2f2e575dcbb not found: ID does not exist" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.562432 4799 scope.go:117] "RemoveContainer" containerID="1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018" Jan 27 10:10:43 crc kubenswrapper[4799]: E0127 10:10:43.562813 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018\": container with ID starting with 1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018 not found: ID does not exist" containerID="1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.562855 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018"} err="failed to get container status \"1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018\": rpc error: code = NotFound desc = could not find container \"1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018\": container with ID starting with 1dcb5353f88c6dd80884ad4c111320bc70a67278f48f9e3774d47552f08ec018 not 
found: ID does not exist" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.562868 4799 scope.go:117] "RemoveContainer" containerID="48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf" Jan 27 10:10:43 crc kubenswrapper[4799]: E0127 10:10:43.563260 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf\": container with ID starting with 48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf not found: ID does not exist" containerID="48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf" Jan 27 10:10:43 crc kubenswrapper[4799]: I0127 10:10:43.563287 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf"} err="failed to get container status \"48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf\": rpc error: code = NotFound desc = could not find container \"48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf\": container with ID starting with 48f20cda246d36a063f1f239bab4512776cb512ae0c5bcd2477c7a603b441caf not found: ID does not exist" Jan 27 10:10:44 crc kubenswrapper[4799]: I0127 10:10:44.469713 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-69rmq" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="registry-server" containerID="cri-o://38636d87a948906ad10c271a0ee7eda862fd397f5e0e83f147e1f668334eef3e" gracePeriod=2 Jan 27 10:10:44 crc kubenswrapper[4799]: I0127 10:10:44.484623 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" path="/var/lib/kubelet/pods/9deba786-413f-4c64-b0ed-d1e44002f9c0/volumes" Jan 27 10:10:45 crc kubenswrapper[4799]: I0127 10:10:45.491245 4799 generic.go:334] "Generic (PLEG): container 
finished" podID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerID="38636d87a948906ad10c271a0ee7eda862fd397f5e0e83f147e1f668334eef3e" exitCode=0 Jan 27 10:10:45 crc kubenswrapper[4799]: I0127 10:10:45.491433 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69rmq" event={"ID":"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f","Type":"ContainerDied","Data":"38636d87a948906ad10c271a0ee7eda862fd397f5e0e83f147e1f668334eef3e"} Jan 27 10:10:45 crc kubenswrapper[4799]: I0127 10:10:45.889982 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.037684 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-utilities\") pod \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.038119 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zhqt\" (UniqueName: \"kubernetes.io/projected/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-kube-api-access-7zhqt\") pod \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.038158 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-catalog-content\") pod \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\" (UID: \"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f\") " Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.038597 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-utilities" (OuterVolumeSpecName: "utilities") pod 
"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" (UID: "7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.056793 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-kube-api-access-7zhqt" (OuterVolumeSpecName: "kube-api-access-7zhqt") pod "7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" (UID: "7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f"). InnerVolumeSpecName "kube-api-access-7zhqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.075842 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" (UID: "7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.140559 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zhqt\" (UniqueName: \"kubernetes.io/projected/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-kube-api-access-7zhqt\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.140601 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.140612 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.508274 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69rmq" event={"ID":"7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f","Type":"ContainerDied","Data":"23123b55ee4f2338f656a879088f958777644bd19bb2f8f4871e8650d7ccf3ec"} Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.508385 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69rmq" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.508422 4799 scope.go:117] "RemoveContainer" containerID="38636d87a948906ad10c271a0ee7eda862fd397f5e0e83f147e1f668334eef3e" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.559849 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69rmq"] Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.568268 4799 scope.go:117] "RemoveContainer" containerID="8572d5993a7ad182a7e4882ff92c5a5e49c930bca835f07cdf6d61455629ea7b" Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.572487 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-69rmq"] Jan 27 10:10:46 crc kubenswrapper[4799]: I0127 10:10:46.598848 4799 scope.go:117] "RemoveContainer" containerID="c89ea0d1f7a40adf764da3be041f7622b97587f20429e0788d1cd08908aa9556" Jan 27 10:10:48 crc kubenswrapper[4799]: I0127 10:10:48.466887 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" path="/var/lib/kubelet/pods/7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f/volumes" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.920498 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-np5t6"] Jan 27 10:12:46 crc kubenswrapper[4799]: E0127 10:12:46.921540 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="extract-content" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.921555 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="extract-content" Jan 27 10:12:46 crc kubenswrapper[4799]: E0127 10:12:46.921571 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="registry-server" Jan 27 10:12:46 crc 
kubenswrapper[4799]: I0127 10:12:46.921580 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="registry-server" Jan 27 10:12:46 crc kubenswrapper[4799]: E0127 10:12:46.921589 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="extract-utilities" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.921597 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="extract-utilities" Jan 27 10:12:46 crc kubenswrapper[4799]: E0127 10:12:46.921610 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="extract-content" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.921617 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="extract-content" Jan 27 10:12:46 crc kubenswrapper[4799]: E0127 10:12:46.921642 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="registry-server" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.921762 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="registry-server" Jan 27 10:12:46 crc kubenswrapper[4799]: E0127 10:12:46.921780 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="extract-utilities" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.921788 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="extract-utilities" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.922033 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9deba786-413f-4c64-b0ed-d1e44002f9c0" containerName="registry-server" Jan 27 10:12:46 crc 
kubenswrapper[4799]: I0127 10:12:46.922053 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea68dab-3ab4-4cd1-8d5d-e0f4728d762f" containerName="registry-server" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.923792 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:46 crc kubenswrapper[4799]: I0127 10:12:46.941155 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np5t6"] Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.110070 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-catalog-content\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.110149 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6tk2\" (UniqueName: \"kubernetes.io/projected/285acdb2-ac4d-4981-87bc-f0f37ac40040-kube-api-access-d6tk2\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.110319 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-utilities\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.212190 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6tk2\" (UniqueName: 
\"kubernetes.io/projected/285acdb2-ac4d-4981-87bc-f0f37ac40040-kube-api-access-d6tk2\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.212394 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-utilities\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.212503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-catalog-content\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.212976 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-utilities\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.213111 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-catalog-content\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.232759 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6tk2\" (UniqueName: 
\"kubernetes.io/projected/285acdb2-ac4d-4981-87bc-f0f37ac40040-kube-api-access-d6tk2\") pod \"redhat-operators-np5t6\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.246203 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.753339 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np5t6"] Jan 27 10:12:47 crc kubenswrapper[4799]: I0127 10:12:47.884220 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np5t6" event={"ID":"285acdb2-ac4d-4981-87bc-f0f37ac40040","Type":"ContainerStarted","Data":"9c535cbec6c06fdc31c162be1b7261e44389d2227566c63f9cd0322e08f77955"} Jan 27 10:12:48 crc kubenswrapper[4799]: I0127 10:12:48.894214 4799 generic.go:334] "Generic (PLEG): container finished" podID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerID="45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1" exitCode=0 Jan 27 10:12:48 crc kubenswrapper[4799]: I0127 10:12:48.894326 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np5t6" event={"ID":"285acdb2-ac4d-4981-87bc-f0f37ac40040","Type":"ContainerDied","Data":"45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1"} Jan 27 10:12:49 crc kubenswrapper[4799]: I0127 10:12:49.904169 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np5t6" event={"ID":"285acdb2-ac4d-4981-87bc-f0f37ac40040","Type":"ContainerStarted","Data":"41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466"} Jan 27 10:12:50 crc kubenswrapper[4799]: I0127 10:12:50.914107 4799 generic.go:334] "Generic (PLEG): container finished" podID="285acdb2-ac4d-4981-87bc-f0f37ac40040" 
containerID="41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466" exitCode=0 Jan 27 10:12:50 crc kubenswrapper[4799]: I0127 10:12:50.914213 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np5t6" event={"ID":"285acdb2-ac4d-4981-87bc-f0f37ac40040","Type":"ContainerDied","Data":"41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466"} Jan 27 10:12:51 crc kubenswrapper[4799]: I0127 10:12:51.923641 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np5t6" event={"ID":"285acdb2-ac4d-4981-87bc-f0f37ac40040","Type":"ContainerStarted","Data":"5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd"} Jan 27 10:12:51 crc kubenswrapper[4799]: I0127 10:12:51.948748 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-np5t6" podStartSLOduration=3.47064367 podStartE2EDuration="5.948725707s" podCreationTimestamp="2026-01-27 10:12:46 +0000 UTC" firstStartedPulling="2026-01-27 10:12:48.897413967 +0000 UTC m=+8835.208518032" lastFinishedPulling="2026-01-27 10:12:51.375495964 +0000 UTC m=+8837.686600069" observedRunningTime="2026-01-27 10:12:51.940464573 +0000 UTC m=+8838.251568668" watchObservedRunningTime="2026-01-27 10:12:51.948725707 +0000 UTC m=+8838.259829772" Jan 27 10:12:53 crc kubenswrapper[4799]: I0127 10:12:53.731229 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:12:53 crc kubenswrapper[4799]: I0127 10:12:53.731324 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:12:57 crc kubenswrapper[4799]: I0127 10:12:57.246345 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:57 crc kubenswrapper[4799]: I0127 10:12:57.246946 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:57 crc kubenswrapper[4799]: I0127 10:12:57.815417 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:58 crc kubenswrapper[4799]: I0127 10:12:58.044238 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:12:58 crc kubenswrapper[4799]: I0127 10:12:58.093735 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np5t6"] Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.015634 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-np5t6" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="registry-server" containerID="cri-o://5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd" gracePeriod=2 Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.539616 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.697206 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-utilities\") pod \"285acdb2-ac4d-4981-87bc-f0f37ac40040\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.697287 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-catalog-content\") pod \"285acdb2-ac4d-4981-87bc-f0f37ac40040\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.697449 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6tk2\" (UniqueName: \"kubernetes.io/projected/285acdb2-ac4d-4981-87bc-f0f37ac40040-kube-api-access-d6tk2\") pod \"285acdb2-ac4d-4981-87bc-f0f37ac40040\" (UID: \"285acdb2-ac4d-4981-87bc-f0f37ac40040\") " Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.699078 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-utilities" (OuterVolumeSpecName: "utilities") pod "285acdb2-ac4d-4981-87bc-f0f37ac40040" (UID: "285acdb2-ac4d-4981-87bc-f0f37ac40040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.705670 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285acdb2-ac4d-4981-87bc-f0f37ac40040-kube-api-access-d6tk2" (OuterVolumeSpecName: "kube-api-access-d6tk2") pod "285acdb2-ac4d-4981-87bc-f0f37ac40040" (UID: "285acdb2-ac4d-4981-87bc-f0f37ac40040"). InnerVolumeSpecName "kube-api-access-d6tk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.800378 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:13:00 crc kubenswrapper[4799]: I0127 10:13:00.800426 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6tk2\" (UniqueName: \"kubernetes.io/projected/285acdb2-ac4d-4981-87bc-f0f37ac40040-kube-api-access-d6tk2\") on node \"crc\" DevicePath \"\"" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.028217 4799 generic.go:334] "Generic (PLEG): container finished" podID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerID="5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd" exitCode=0 Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.028295 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np5t6" event={"ID":"285acdb2-ac4d-4981-87bc-f0f37ac40040","Type":"ContainerDied","Data":"5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd"} Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.028399 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-np5t6" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.028428 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np5t6" event={"ID":"285acdb2-ac4d-4981-87bc-f0f37ac40040","Type":"ContainerDied","Data":"9c535cbec6c06fdc31c162be1b7261e44389d2227566c63f9cd0322e08f77955"} Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.028463 4799 scope.go:117] "RemoveContainer" containerID="5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.050094 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "285acdb2-ac4d-4981-87bc-f0f37ac40040" (UID: "285acdb2-ac4d-4981-87bc-f0f37ac40040"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.055663 4799 scope.go:117] "RemoveContainer" containerID="41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.107347 4799 scope.go:117] "RemoveContainer" containerID="45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.107998 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285acdb2-ac4d-4981-87bc-f0f37ac40040-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.160650 4799 scope.go:117] "RemoveContainer" containerID="5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd" Jan 27 10:13:01 crc kubenswrapper[4799]: E0127 10:13:01.162032 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd\": container with ID starting with 5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd not found: ID does not exist" containerID="5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.162101 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd"} err="failed to get container status \"5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd\": rpc error: code = NotFound desc = could not find container \"5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd\": container with ID starting with 5c39c501100d5c7cb95def87464ac54175b84ec150fff3f222c0deebdc16a6cd not found: ID does not exist" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.162150 4799 scope.go:117] "RemoveContainer" containerID="41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466" Jan 27 10:13:01 crc kubenswrapper[4799]: E0127 10:13:01.162863 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466\": container with ID starting with 41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466 not found: ID does not exist" containerID="41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.162903 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466"} err="failed to get container status \"41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466\": rpc error: code = NotFound desc = could not find container \"41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466\": 
container with ID starting with 41828f5e9cc17cde89400f158538f55b4e497fae9ebcb7291f4af5a45b732466 not found: ID does not exist" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.162929 4799 scope.go:117] "RemoveContainer" containerID="45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1" Jan 27 10:13:01 crc kubenswrapper[4799]: E0127 10:13:01.163339 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1\": container with ID starting with 45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1 not found: ID does not exist" containerID="45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.163406 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1"} err="failed to get container status \"45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1\": rpc error: code = NotFound desc = could not find container \"45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1\": container with ID starting with 45c2e547ff6bd44f559edb2a68139621fb679369c175105bd9db85dc7b5352b1 not found: ID does not exist" Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.382377 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np5t6"] Jan 27 10:13:01 crc kubenswrapper[4799]: I0127 10:13:01.395164 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-np5t6"] Jan 27 10:13:02 crc kubenswrapper[4799]: I0127 10:13:02.475074 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" path="/var/lib/kubelet/pods/285acdb2-ac4d-4981-87bc-f0f37ac40040/volumes" Jan 27 10:13:23 crc kubenswrapper[4799]: 
I0127 10:13:23.730978 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:13:23 crc kubenswrapper[4799]: I0127 10:13:23.731950 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:13:53 crc kubenswrapper[4799]: I0127 10:13:53.731399 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:13:53 crc kubenswrapper[4799]: I0127 10:13:53.732013 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:13:53 crc kubenswrapper[4799]: I0127 10:13:53.732078 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 10:13:53 crc kubenswrapper[4799]: I0127 10:13:53.733436 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e"} 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:13:53 crc kubenswrapper[4799]: I0127 10:13:53.733539 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" gracePeriod=600 Jan 27 10:13:53 crc kubenswrapper[4799]: E0127 10:13:53.853520 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:13:54 crc kubenswrapper[4799]: I0127 10:13:54.615338 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" exitCode=0 Jan 27 10:13:54 crc kubenswrapper[4799]: I0127 10:13:54.615587 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e"} Jan 27 10:13:54 crc kubenswrapper[4799]: I0127 10:13:54.615754 4799 scope.go:117] "RemoveContainer" containerID="6f57606b63046eee0f036a82ab3032ceaa1614d7b07921f9827b64e7d98f2d97" Jan 27 10:13:54 crc kubenswrapper[4799]: I0127 10:13:54.616621 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 
27 10:13:54 crc kubenswrapper[4799]: E0127 10:13:54.617064 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:14:07 crc kubenswrapper[4799]: I0127 10:14:07.452515 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:14:07 crc kubenswrapper[4799]: E0127 10:14:07.453366 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:14:18 crc kubenswrapper[4799]: I0127 10:14:18.451598 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:14:18 crc kubenswrapper[4799]: E0127 10:14:18.452787 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:14:33 crc kubenswrapper[4799]: I0127 10:14:33.451859 4799 scope.go:117] "RemoveContainer" 
containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:14:33 crc kubenswrapper[4799]: E0127 10:14:33.452841 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:14:45 crc kubenswrapper[4799]: I0127 10:14:45.453203 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:14:45 crc kubenswrapper[4799]: E0127 10:14:45.454281 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:14:56 crc kubenswrapper[4799]: I0127 10:14:56.451840 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:14:56 crc kubenswrapper[4799]: E0127 10:14:56.454628 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.159051 4799 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq"] Jan 27 10:15:00 crc kubenswrapper[4799]: E0127 10:15:00.159863 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="registry-server" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.159879 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="registry-server" Jan 27 10:15:00 crc kubenswrapper[4799]: E0127 10:15:00.159918 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="extract-content" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.159926 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="extract-content" Jan 27 10:15:00 crc kubenswrapper[4799]: E0127 10:15:00.159960 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="extract-utilities" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.159968 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="extract-utilities" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.160177 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="285acdb2-ac4d-4981-87bc-f0f37ac40040" containerName="registry-server" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.161018 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.164246 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.167094 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.181024 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq"] Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.266727 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86314273-6d1f-4cea-95bf-81f04e98f8c4-config-volume\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.267225 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfc8\" (UniqueName: \"kubernetes.io/projected/86314273-6d1f-4cea-95bf-81f04e98f8c4-kube-api-access-tnfc8\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.267415 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86314273-6d1f-4cea-95bf-81f04e98f8c4-secret-volume\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.373790 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86314273-6d1f-4cea-95bf-81f04e98f8c4-config-volume\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.373960 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfc8\" (UniqueName: \"kubernetes.io/projected/86314273-6d1f-4cea-95bf-81f04e98f8c4-kube-api-access-tnfc8\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.374408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86314273-6d1f-4cea-95bf-81f04e98f8c4-secret-volume\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.376569 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86314273-6d1f-4cea-95bf-81f04e98f8c4-config-volume\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.387421 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/86314273-6d1f-4cea-95bf-81f04e98f8c4-secret-volume\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.403190 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfc8\" (UniqueName: \"kubernetes.io/projected/86314273-6d1f-4cea-95bf-81f04e98f8c4-kube-api-access-tnfc8\") pod \"collect-profiles-29491815-q94rq\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.491611 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:00 crc kubenswrapper[4799]: I0127 10:15:00.988492 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq"] Jan 27 10:15:01 crc kubenswrapper[4799]: I0127 10:15:01.401186 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" event={"ID":"86314273-6d1f-4cea-95bf-81f04e98f8c4","Type":"ContainerStarted","Data":"496083818f0b5cb48ee88d2f4501f9faaf5af6c95734187928643419a92fe688"} Jan 27 10:15:01 crc kubenswrapper[4799]: I0127 10:15:01.401583 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" event={"ID":"86314273-6d1f-4cea-95bf-81f04e98f8c4","Type":"ContainerStarted","Data":"5b18c8e3c4ed125adca9672d14e7aa6a83b6d6dc90f1830c95ab09b497ca8843"} Jan 27 10:15:01 crc kubenswrapper[4799]: I0127 10:15:01.438390 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" 
podStartSLOduration=1.438363588 podStartE2EDuration="1.438363588s" podCreationTimestamp="2026-01-27 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:15:01.428937601 +0000 UTC m=+8967.740041656" watchObservedRunningTime="2026-01-27 10:15:01.438363588 +0000 UTC m=+8967.749467653" Jan 27 10:15:02 crc kubenswrapper[4799]: I0127 10:15:02.411809 4799 generic.go:334] "Generic (PLEG): container finished" podID="86314273-6d1f-4cea-95bf-81f04e98f8c4" containerID="496083818f0b5cb48ee88d2f4501f9faaf5af6c95734187928643419a92fe688" exitCode=0 Jan 27 10:15:02 crc kubenswrapper[4799]: I0127 10:15:02.411922 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" event={"ID":"86314273-6d1f-4cea-95bf-81f04e98f8c4","Type":"ContainerDied","Data":"496083818f0b5cb48ee88d2f4501f9faaf5af6c95734187928643419a92fe688"} Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.802753 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.859625 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86314273-6d1f-4cea-95bf-81f04e98f8c4-config-volume\") pod \"86314273-6d1f-4cea-95bf-81f04e98f8c4\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.859819 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86314273-6d1f-4cea-95bf-81f04e98f8c4-secret-volume\") pod \"86314273-6d1f-4cea-95bf-81f04e98f8c4\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.859841 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnfc8\" (UniqueName: \"kubernetes.io/projected/86314273-6d1f-4cea-95bf-81f04e98f8c4-kube-api-access-tnfc8\") pod \"86314273-6d1f-4cea-95bf-81f04e98f8c4\" (UID: \"86314273-6d1f-4cea-95bf-81f04e98f8c4\") " Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.860215 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86314273-6d1f-4cea-95bf-81f04e98f8c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "86314273-6d1f-4cea-95bf-81f04e98f8c4" (UID: "86314273-6d1f-4cea-95bf-81f04e98f8c4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.860629 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86314273-6d1f-4cea-95bf-81f04e98f8c4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.866630 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86314273-6d1f-4cea-95bf-81f04e98f8c4-kube-api-access-tnfc8" (OuterVolumeSpecName: "kube-api-access-tnfc8") pod "86314273-6d1f-4cea-95bf-81f04e98f8c4" (UID: "86314273-6d1f-4cea-95bf-81f04e98f8c4"). InnerVolumeSpecName "kube-api-access-tnfc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.867158 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86314273-6d1f-4cea-95bf-81f04e98f8c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "86314273-6d1f-4cea-95bf-81f04e98f8c4" (UID: "86314273-6d1f-4cea-95bf-81f04e98f8c4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.962192 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnfc8\" (UniqueName: \"kubernetes.io/projected/86314273-6d1f-4cea-95bf-81f04e98f8c4-kube-api-access-tnfc8\") on node \"crc\" DevicePath \"\"" Jan 27 10:15:03 crc kubenswrapper[4799]: I0127 10:15:03.962237 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86314273-6d1f-4cea-95bf-81f04e98f8c4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:15:04 crc kubenswrapper[4799]: I0127 10:15:04.438590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" event={"ID":"86314273-6d1f-4cea-95bf-81f04e98f8c4","Type":"ContainerDied","Data":"5b18c8e3c4ed125adca9672d14e7aa6a83b6d6dc90f1830c95ab09b497ca8843"} Jan 27 10:15:04 crc kubenswrapper[4799]: I0127 10:15:04.439374 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b18c8e3c4ed125adca9672d14e7aa6a83b6d6dc90f1830c95ab09b497ca8843" Jan 27 10:15:04 crc kubenswrapper[4799]: I0127 10:15:04.438646 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-q94rq" Jan 27 10:15:04 crc kubenswrapper[4799]: I0127 10:15:04.903031 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb"] Jan 27 10:15:04 crc kubenswrapper[4799]: I0127 10:15:04.911722 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491770-q9mlb"] Jan 27 10:15:06 crc kubenswrapper[4799]: I0127 10:15:06.468138 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b7b0162-4436-4d26-ba2b-58a750a5b02e" path="/var/lib/kubelet/pods/0b7b0162-4436-4d26-ba2b-58a750a5b02e/volumes" Jan 27 10:15:09 crc kubenswrapper[4799]: I0127 10:15:09.451841 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:15:09 crc kubenswrapper[4799]: E0127 10:15:09.452998 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:15:22 crc kubenswrapper[4799]: I0127 10:15:22.453171 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:15:22 crc kubenswrapper[4799]: E0127 10:15:22.455076 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:15:33 crc kubenswrapper[4799]: I0127 10:15:33.451104 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:15:33 crc kubenswrapper[4799]: E0127 10:15:33.452058 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:15:45 crc kubenswrapper[4799]: I0127 10:15:45.009381 4799 scope.go:117] "RemoveContainer" containerID="fdc480e47f5ac9c13eafcc30671270a042a08e2e866f3def9f1d225a54ae236f" Jan 27 10:15:45 crc kubenswrapper[4799]: I0127 10:15:45.451958 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:15:45 crc kubenswrapper[4799]: E0127 10:15:45.453090 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:15:57 crc kubenswrapper[4799]: I0127 10:15:57.451847 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:15:57 crc kubenswrapper[4799]: E0127 10:15:57.452785 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:16:12 crc kubenswrapper[4799]: I0127 10:16:12.451727 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:16:12 crc kubenswrapper[4799]: E0127 10:16:12.453035 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:16:26 crc kubenswrapper[4799]: I0127 10:16:26.452023 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:16:26 crc kubenswrapper[4799]: E0127 10:16:26.452745 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.612178 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5d2qr"] Jan 27 10:16:37 crc kubenswrapper[4799]: E0127 10:16:37.615532 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86314273-6d1f-4cea-95bf-81f04e98f8c4" 
containerName="collect-profiles" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.615621 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="86314273-6d1f-4cea-95bf-81f04e98f8c4" containerName="collect-profiles" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.615859 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="86314273-6d1f-4cea-95bf-81f04e98f8c4" containerName="collect-profiles" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.618079 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.623875 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5d2qr"] Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.650557 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgbkh\" (UniqueName: \"kubernetes.io/projected/f191d77e-4275-45bb-ab1e-2c4571aedb1e-kube-api-access-sgbkh\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.650789 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-catalog-content\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.650906 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-utilities\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " 
pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.752594 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-utilities\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.752768 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgbkh\" (UniqueName: \"kubernetes.io/projected/f191d77e-4275-45bb-ab1e-2c4571aedb1e-kube-api-access-sgbkh\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.752872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-catalog-content\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.753518 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-catalog-content\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.753802 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-utilities\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " 
pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.783345 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgbkh\" (UniqueName: \"kubernetes.io/projected/f191d77e-4275-45bb-ab1e-2c4571aedb1e-kube-api-access-sgbkh\") pod \"certified-operators-5d2qr\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:37 crc kubenswrapper[4799]: I0127 10:16:37.949810 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:38 crc kubenswrapper[4799]: I0127 10:16:38.479688 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5d2qr"] Jan 27 10:16:39 crc kubenswrapper[4799]: I0127 10:16:39.452011 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:16:39 crc kubenswrapper[4799]: E0127 10:16:39.452680 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:16:39 crc kubenswrapper[4799]: I0127 10:16:39.535226 4799 generic.go:334] "Generic (PLEG): container finished" podID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerID="4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe" exitCode=0 Jan 27 10:16:39 crc kubenswrapper[4799]: I0127 10:16:39.535266 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d2qr" 
event={"ID":"f191d77e-4275-45bb-ab1e-2c4571aedb1e","Type":"ContainerDied","Data":"4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe"} Jan 27 10:16:39 crc kubenswrapper[4799]: I0127 10:16:39.535293 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d2qr" event={"ID":"f191d77e-4275-45bb-ab1e-2c4571aedb1e","Type":"ContainerStarted","Data":"a76744872c0175c509359af0145880d6f2a9cf7d85633560c4ca70882a1fdc3a"} Jan 27 10:16:39 crc kubenswrapper[4799]: I0127 10:16:39.537039 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:16:41 crc kubenswrapper[4799]: I0127 10:16:41.608276 4799 generic.go:334] "Generic (PLEG): container finished" podID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerID="d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f" exitCode=0 Jan 27 10:16:41 crc kubenswrapper[4799]: I0127 10:16:41.609813 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d2qr" event={"ID":"f191d77e-4275-45bb-ab1e-2c4571aedb1e","Type":"ContainerDied","Data":"d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f"} Jan 27 10:16:42 crc kubenswrapper[4799]: I0127 10:16:42.622171 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d2qr" event={"ID":"f191d77e-4275-45bb-ab1e-2c4571aedb1e","Type":"ContainerStarted","Data":"2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a"} Jan 27 10:16:42 crc kubenswrapper[4799]: I0127 10:16:42.641586 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5d2qr" podStartSLOduration=3.139235739 podStartE2EDuration="5.641562005s" podCreationTimestamp="2026-01-27 10:16:37 +0000 UTC" firstStartedPulling="2026-01-27 10:16:39.53677706 +0000 UTC m=+9065.847881125" lastFinishedPulling="2026-01-27 10:16:42.039103286 +0000 UTC 
m=+9068.350207391" observedRunningTime="2026-01-27 10:16:42.64028172 +0000 UTC m=+9068.951385785" watchObservedRunningTime="2026-01-27 10:16:42.641562005 +0000 UTC m=+9068.952666070" Jan 27 10:16:47 crc kubenswrapper[4799]: I0127 10:16:47.950439 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:47 crc kubenswrapper[4799]: I0127 10:16:47.951126 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:48 crc kubenswrapper[4799]: I0127 10:16:48.022101 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:48 crc kubenswrapper[4799]: I0127 10:16:48.729015 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:48 crc kubenswrapper[4799]: I0127 10:16:48.788598 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5d2qr"] Jan 27 10:16:50 crc kubenswrapper[4799]: I0127 10:16:50.697682 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5d2qr" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="registry-server" containerID="cri-o://2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a" gracePeriod=2 Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.244380 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.285236 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgbkh\" (UniqueName: \"kubernetes.io/projected/f191d77e-4275-45bb-ab1e-2c4571aedb1e-kube-api-access-sgbkh\") pod \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.285435 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-utilities\") pod \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.285680 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-catalog-content\") pod \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\" (UID: \"f191d77e-4275-45bb-ab1e-2c4571aedb1e\") " Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.286885 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-utilities" (OuterVolumeSpecName: "utilities") pod "f191d77e-4275-45bb-ab1e-2c4571aedb1e" (UID: "f191d77e-4275-45bb-ab1e-2c4571aedb1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.295473 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f191d77e-4275-45bb-ab1e-2c4571aedb1e-kube-api-access-sgbkh" (OuterVolumeSpecName: "kube-api-access-sgbkh") pod "f191d77e-4275-45bb-ab1e-2c4571aedb1e" (UID: "f191d77e-4275-45bb-ab1e-2c4571aedb1e"). InnerVolumeSpecName "kube-api-access-sgbkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.361950 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f191d77e-4275-45bb-ab1e-2c4571aedb1e" (UID: "f191d77e-4275-45bb-ab1e-2c4571aedb1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.388845 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgbkh\" (UniqueName: \"kubernetes.io/projected/f191d77e-4275-45bb-ab1e-2c4571aedb1e-kube-api-access-sgbkh\") on node \"crc\" DevicePath \"\"" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.388885 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.388896 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f191d77e-4275-45bb-ab1e-2c4571aedb1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.451686 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:16:51 crc kubenswrapper[4799]: E0127 10:16:51.452152 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:16:51 
crc kubenswrapper[4799]: I0127 10:16:51.708831 4799 generic.go:334] "Generic (PLEG): container finished" podID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerID="2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a" exitCode=0 Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.708881 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d2qr" event={"ID":"f191d77e-4275-45bb-ab1e-2c4571aedb1e","Type":"ContainerDied","Data":"2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a"} Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.708915 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d2qr" event={"ID":"f191d77e-4275-45bb-ab1e-2c4571aedb1e","Type":"ContainerDied","Data":"a76744872c0175c509359af0145880d6f2a9cf7d85633560c4ca70882a1fdc3a"} Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.708937 4799 scope.go:117] "RemoveContainer" containerID="2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.708971 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5d2qr" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.734063 4799 scope.go:117] "RemoveContainer" containerID="d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f" Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.756842 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5d2qr"] Jan 27 10:16:51 crc kubenswrapper[4799]: I0127 10:16:51.765885 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5d2qr"] Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.057000 4799 scope.go:117] "RemoveContainer" containerID="4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe" Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.125321 4799 scope.go:117] "RemoveContainer" containerID="2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a" Jan 27 10:16:52 crc kubenswrapper[4799]: E0127 10:16:52.128771 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a\": container with ID starting with 2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a not found: ID does not exist" containerID="2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a" Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.128803 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a"} err="failed to get container status \"2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a\": rpc error: code = NotFound desc = could not find container \"2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a\": container with ID starting with 2f643084bf0fe60708166dcc933ac3d4fdaafe1eeb05e20507d7a7c52870154a not 
found: ID does not exist" Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.128826 4799 scope.go:117] "RemoveContainer" containerID="d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f" Jan 27 10:16:52 crc kubenswrapper[4799]: E0127 10:16:52.129134 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f\": container with ID starting with d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f not found: ID does not exist" containerID="d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f" Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.129157 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f"} err="failed to get container status \"d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f\": rpc error: code = NotFound desc = could not find container \"d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f\": container with ID starting with d9029461d41071cdd12a3c400de5e971b5c13840af4aec5fb1efe0dc5029322f not found: ID does not exist" Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.129170 4799 scope.go:117] "RemoveContainer" containerID="4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe" Jan 27 10:16:52 crc kubenswrapper[4799]: E0127 10:16:52.130805 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe\": container with ID starting with 4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe not found: ID does not exist" containerID="4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe" Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.130826 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe"} err="failed to get container status \"4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe\": rpc error: code = NotFound desc = could not find container \"4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe\": container with ID starting with 4cd116c26b3d163710d92bde53b0bd16edf3556e9e128b97943b9d434143b9fe not found: ID does not exist" Jan 27 10:16:52 crc kubenswrapper[4799]: I0127 10:16:52.461144 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" path="/var/lib/kubelet/pods/f191d77e-4275-45bb-ab1e-2c4571aedb1e/volumes" Jan 27 10:17:02 crc kubenswrapper[4799]: I0127 10:17:02.451749 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:17:02 crc kubenswrapper[4799]: E0127 10:17:02.453461 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:17:05 crc kubenswrapper[4799]: I0127 10:17:05.749711 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f1c13e78-9e9e-4b56-aba4-df7d2a77339d" containerName="galera" probeResult="failure" output="command timed out" Jan 27 10:17:15 crc kubenswrapper[4799]: I0127 10:17:15.452192 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:17:15 crc kubenswrapper[4799]: E0127 10:17:15.453559 4799 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:17:27 crc kubenswrapper[4799]: I0127 10:17:27.452578 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:17:27 crc kubenswrapper[4799]: E0127 10:17:27.453734 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:17:39 crc kubenswrapper[4799]: I0127 10:17:39.455200 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:17:39 crc kubenswrapper[4799]: E0127 10:17:39.455913 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:17:50 crc kubenswrapper[4799]: I0127 10:17:50.451944 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:17:50 crc kubenswrapper[4799]: E0127 10:17:50.453217 4799 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:18:02 crc kubenswrapper[4799]: I0127 10:18:02.452084 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:18:02 crc kubenswrapper[4799]: E0127 10:18:02.453153 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:18:14 crc kubenswrapper[4799]: I0127 10:18:14.463724 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:18:14 crc kubenswrapper[4799]: E0127 10:18:14.465029 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:18:27 crc kubenswrapper[4799]: I0127 10:18:27.452913 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:18:27 crc kubenswrapper[4799]: E0127 
10:18:27.454234 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:18:42 crc kubenswrapper[4799]: I0127 10:18:42.451953 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:18:42 crc kubenswrapper[4799]: E0127 10:18:42.452857 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:18:57 crc kubenswrapper[4799]: I0127 10:18:57.452087 4799 scope.go:117] "RemoveContainer" containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:18:58 crc kubenswrapper[4799]: I0127 10:18:58.572009 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"0df998f00d1367b8658a5dfcfc7a9edf1970e35af92947f241a6cf155f8c1cc4"} Jan 27 10:21:23 crc kubenswrapper[4799]: I0127 10:21:23.731568 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 
10:21:23 crc kubenswrapper[4799]: I0127 10:21:23.732366 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:21:53 crc kubenswrapper[4799]: I0127 10:21:53.732104 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:21:53 crc kubenswrapper[4799]: I0127 10:21:53.732771 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:22:23 crc kubenswrapper[4799]: I0127 10:22:23.731629 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:22:23 crc kubenswrapper[4799]: I0127 10:22:23.732898 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:22:23 crc kubenswrapper[4799]: I0127 10:22:23.732998 4799 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 10:22:23 crc kubenswrapper[4799]: I0127 10:22:23.733831 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0df998f00d1367b8658a5dfcfc7a9edf1970e35af92947f241a6cf155f8c1cc4"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:22:23 crc kubenswrapper[4799]: I0127 10:22:23.733958 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://0df998f00d1367b8658a5dfcfc7a9edf1970e35af92947f241a6cf155f8c1cc4" gracePeriod=600 Jan 27 10:22:24 crc kubenswrapper[4799]: I0127 10:22:24.852711 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="0df998f00d1367b8658a5dfcfc7a9edf1970e35af92947f241a6cf155f8c1cc4" exitCode=0 Jan 27 10:22:24 crc kubenswrapper[4799]: I0127 10:22:24.852780 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"0df998f00d1367b8658a5dfcfc7a9edf1970e35af92947f241a6cf155f8c1cc4"} Jan 27 10:22:24 crc kubenswrapper[4799]: I0127 10:22:24.853868 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6"} Jan 27 10:22:24 crc kubenswrapper[4799]: I0127 10:22:24.853915 4799 scope.go:117] "RemoveContainer" 
containerID="84418ad87863686e06a206f86722c077fee4b6651321909a2e0e5a270b615d1e" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.829910 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gnrr6"] Jan 27 10:23:12 crc kubenswrapper[4799]: E0127 10:23:12.831615 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="extract-content" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.831648 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="extract-content" Jan 27 10:23:12 crc kubenswrapper[4799]: E0127 10:23:12.831685 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="extract-utilities" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.831701 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="extract-utilities" Jan 27 10:23:12 crc kubenswrapper[4799]: E0127 10:23:12.831720 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="registry-server" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.831738 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="registry-server" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.832341 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f191d77e-4275-45bb-ab1e-2c4571aedb1e" containerName="registry-server" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.836273 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.845385 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnrr6"] Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.943374 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-catalog-content\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.943419 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxqf\" (UniqueName: \"kubernetes.io/projected/37c8c5ed-2473-414f-8f7c-1126b0ba114e-kube-api-access-2zxqf\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:12 crc kubenswrapper[4799]: I0127 10:23:12.943581 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-utilities\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.045479 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-utilities\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.045628 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-catalog-content\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.045654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zxqf\" (UniqueName: \"kubernetes.io/projected/37c8c5ed-2473-414f-8f7c-1126b0ba114e-kube-api-access-2zxqf\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.046255 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-utilities\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.046381 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-catalog-content\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.084684 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zxqf\" (UniqueName: \"kubernetes.io/projected/37c8c5ed-2473-414f-8f7c-1126b0ba114e-kube-api-access-2zxqf\") pod \"redhat-marketplace-gnrr6\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") " pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.173825 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnrr6" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.411739 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zz8v9"] Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.414646 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.438853 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zz8v9"] Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.584738 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-utilities\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.585181 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnk9\" (UniqueName: \"kubernetes.io/projected/48c6ad23-943d-4951-bb0b-af8fbe9f4609-kube-api-access-4qnk9\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.585229 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-catalog-content\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.687093 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-utilities\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.687160 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnk9\" (UniqueName: \"kubernetes.io/projected/48c6ad23-943d-4951-bb0b-af8fbe9f4609-kube-api-access-4qnk9\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.687190 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-catalog-content\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.687646 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-utilities\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.687692 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-catalog-content\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9" Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.722653 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnk9\" (UniqueName: 
\"kubernetes.io/projected/48c6ad23-943d-4951-bb0b-af8fbe9f4609-kube-api-access-4qnk9\") pod \"community-operators-zz8v9\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") " pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.728159 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnrr6"]
Jan 27 10:23:13 crc kubenswrapper[4799]: I0127 10:23:13.749530 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:14 crc kubenswrapper[4799]: I0127 10:23:14.270561 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zz8v9"]
Jan 27 10:23:14 crc kubenswrapper[4799]: I0127 10:23:14.524507 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerStarted","Data":"9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1"}
Jan 27 10:23:14 crc kubenswrapper[4799]: I0127 10:23:14.524601 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerStarted","Data":"a670ce0252c3519510913c0053d86ce0b8d640d24d32ceec5ee1acf7827dee38"}
Jan 27 10:23:14 crc kubenswrapper[4799]: I0127 10:23:14.526080 4799 generic.go:334] "Generic (PLEG): container finished" podID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerID="cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186" exitCode=0
Jan 27 10:23:14 crc kubenswrapper[4799]: I0127 10:23:14.526127 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnrr6" event={"ID":"37c8c5ed-2473-414f-8f7c-1126b0ba114e","Type":"ContainerDied","Data":"cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186"}
Jan 27 10:23:14 crc kubenswrapper[4799]: I0127 10:23:14.526169 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnrr6" event={"ID":"37c8c5ed-2473-414f-8f7c-1126b0ba114e","Type":"ContainerStarted","Data":"94257b7cbba48c9e7fd263be272d0b1b2441f52b7d9c478ff2e83ef3f8e600ea"}
Jan 27 10:23:14 crc kubenswrapper[4799]: I0127 10:23:14.527115 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.226779 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x6ftl"]
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.233178 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.263625 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6ftl"]
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.429411 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-utilities\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.429914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-catalog-content\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.430145 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsmp\" (UniqueName: \"kubernetes.io/projected/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-kube-api-access-qfsmp\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.538130 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-utilities\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.538410 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-catalog-content\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.538597 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsmp\" (UniqueName: \"kubernetes.io/projected/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-kube-api-access-qfsmp\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.539092 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-catalog-content\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.541452 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-utilities\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.547823 4799 generic.go:334] "Generic (PLEG): container finished" podID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerID="9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1" exitCode=0
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.547967 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerDied","Data":"9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1"}
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.548021 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerStarted","Data":"33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd"}
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.552167 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnrr6" event={"ID":"37c8c5ed-2473-414f-8f7c-1126b0ba114e","Type":"ContainerStarted","Data":"5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81"}
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.564679 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsmp\" (UniqueName: \"kubernetes.io/projected/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-kube-api-access-qfsmp\") pod \"redhat-operators-x6ftl\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") " pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:15 crc kubenswrapper[4799]: I0127 10:23:15.585376 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.039961 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6ftl"]
Jan 27 10:23:16 crc kubenswrapper[4799]: W0127 10:23:16.044397 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f039cb0_72a0_4e7b_8def_6200d5dbbacb.slice/crio-4561993e10dbc3bb42b9a5b722c795db30607615cc2ea24d799ce712c128e0f0 WatchSource:0}: Error finding container 4561993e10dbc3bb42b9a5b722c795db30607615cc2ea24d799ce712c128e0f0: Status 404 returned error can't find the container with id 4561993e10dbc3bb42b9a5b722c795db30607615cc2ea24d799ce712c128e0f0
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.560487 4799 generic.go:334] "Generic (PLEG): container finished" podID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerID="5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714" exitCode=0
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.560556 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6ftl" event={"ID":"8f039cb0-72a0-4e7b-8def-6200d5dbbacb","Type":"ContainerDied","Data":"5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714"}
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.560584 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6ftl" event={"ID":"8f039cb0-72a0-4e7b-8def-6200d5dbbacb","Type":"ContainerStarted","Data":"4561993e10dbc3bb42b9a5b722c795db30607615cc2ea24d799ce712c128e0f0"}
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.562775 4799 generic.go:334] "Generic (PLEG): container finished" podID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerID="5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81" exitCode=0
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.562904 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnrr6" event={"ID":"37c8c5ed-2473-414f-8f7c-1126b0ba114e","Type":"ContainerDied","Data":"5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81"}
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.565390 4799 generic.go:334] "Generic (PLEG): container finished" podID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerID="33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd" exitCode=0
Jan 27 10:23:16 crc kubenswrapper[4799]: I0127 10:23:16.565424 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerDied","Data":"33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd"}
Jan 27 10:23:17 crc kubenswrapper[4799]: I0127 10:23:17.585047 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnrr6" event={"ID":"37c8c5ed-2473-414f-8f7c-1126b0ba114e","Type":"ContainerStarted","Data":"0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b"}
Jan 27 10:23:17 crc kubenswrapper[4799]: I0127 10:23:17.588401 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerStarted","Data":"f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0"}
Jan 27 10:23:17 crc kubenswrapper[4799]: I0127 10:23:17.615009 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gnrr6" podStartSLOduration=3.138172443 podStartE2EDuration="5.614990945s" podCreationTimestamp="2026-01-27 10:23:12 +0000 UTC" firstStartedPulling="2026-01-27 10:23:14.527951625 +0000 UTC m=+9460.839055690" lastFinishedPulling="2026-01-27 10:23:17.004770117 +0000 UTC m=+9463.315874192" observedRunningTime="2026-01-27 10:23:17.614472751 +0000 UTC m=+9463.925576826" watchObservedRunningTime="2026-01-27 10:23:17.614990945 +0000 UTC m=+9463.926095020"
Jan 27 10:23:17 crc kubenswrapper[4799]: I0127 10:23:17.640868 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zz8v9" podStartSLOduration=2.145489163 podStartE2EDuration="4.640852378s" podCreationTimestamp="2026-01-27 10:23:13 +0000 UTC" firstStartedPulling="2026-01-27 10:23:14.526815604 +0000 UTC m=+9460.837919669" lastFinishedPulling="2026-01-27 10:23:17.022178819 +0000 UTC m=+9463.333282884" observedRunningTime="2026-01-27 10:23:17.638056472 +0000 UTC m=+9463.949160547" watchObservedRunningTime="2026-01-27 10:23:17.640852378 +0000 UTC m=+9463.951956443"
Jan 27 10:23:18 crc kubenswrapper[4799]: I0127 10:23:18.604134 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6ftl" event={"ID":"8f039cb0-72a0-4e7b-8def-6200d5dbbacb","Type":"ContainerStarted","Data":"c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3"}
Jan 27 10:23:20 crc kubenswrapper[4799]: I0127 10:23:20.633690 4799 generic.go:334] "Generic (PLEG): container finished" podID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerID="c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3" exitCode=0
Jan 27 10:23:20 crc kubenswrapper[4799]: I0127 10:23:20.633899 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6ftl" event={"ID":"8f039cb0-72a0-4e7b-8def-6200d5dbbacb","Type":"ContainerDied","Data":"c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3"}
Jan 27 10:23:22 crc kubenswrapper[4799]: I0127 10:23:22.664858 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6ftl" event={"ID":"8f039cb0-72a0-4e7b-8def-6200d5dbbacb","Type":"ContainerStarted","Data":"a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254"}
Jan 27 10:23:22 crc kubenswrapper[4799]: I0127 10:23:22.703831 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x6ftl" podStartSLOduration=2.495607712 podStartE2EDuration="7.703807313s" podCreationTimestamp="2026-01-27 10:23:15 +0000 UTC" firstStartedPulling="2026-01-27 10:23:16.56284983 +0000 UTC m=+9462.873953895" lastFinishedPulling="2026-01-27 10:23:21.771049431 +0000 UTC m=+9468.082153496" observedRunningTime="2026-01-27 10:23:22.697873412 +0000 UTC m=+9469.008977527" watchObservedRunningTime="2026-01-27 10:23:22.703807313 +0000 UTC m=+9469.014911408"
Jan 27 10:23:23 crc kubenswrapper[4799]: I0127 10:23:23.174442 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gnrr6"
Jan 27 10:23:23 crc kubenswrapper[4799]: I0127 10:23:23.174527 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gnrr6"
Jan 27 10:23:23 crc kubenswrapper[4799]: I0127 10:23:23.254058 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gnrr6"
Jan 27 10:23:23 crc kubenswrapper[4799]: I0127 10:23:23.750162 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:23 crc kubenswrapper[4799]: I0127 10:23:23.750487 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:23 crc kubenswrapper[4799]: I0127 10:23:23.752880 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gnrr6"
Jan 27 10:23:24 crc kubenswrapper[4799]: I0127 10:23:24.706374 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:24 crc kubenswrapper[4799]: I0127 10:23:24.762053 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:25 crc kubenswrapper[4799]: I0127 10:23:25.586878 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:25 crc kubenswrapper[4799]: I0127 10:23:25.587319 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:25 crc kubenswrapper[4799]: I0127 10:23:25.611758 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnrr6"]
Jan 27 10:23:25 crc kubenswrapper[4799]: I0127 10:23:25.692106 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gnrr6" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="registry-server" containerID="cri-o://0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b" gracePeriod=2
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.188710 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnrr6"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.285425 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-catalog-content\") pod \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") "
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.285520 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zxqf\" (UniqueName: \"kubernetes.io/projected/37c8c5ed-2473-414f-8f7c-1126b0ba114e-kube-api-access-2zxqf\") pod \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") "
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.285542 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-utilities\") pod \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\" (UID: \"37c8c5ed-2473-414f-8f7c-1126b0ba114e\") "
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.286538 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-utilities" (OuterVolumeSpecName: "utilities") pod "37c8c5ed-2473-414f-8f7c-1126b0ba114e" (UID: "37c8c5ed-2473-414f-8f7c-1126b0ba114e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.293456 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c8c5ed-2473-414f-8f7c-1126b0ba114e-kube-api-access-2zxqf" (OuterVolumeSpecName: "kube-api-access-2zxqf") pod "37c8c5ed-2473-414f-8f7c-1126b0ba114e" (UID: "37c8c5ed-2473-414f-8f7c-1126b0ba114e"). InnerVolumeSpecName "kube-api-access-2zxqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.307514 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37c8c5ed-2473-414f-8f7c-1126b0ba114e" (UID: "37c8c5ed-2473-414f-8f7c-1126b0ba114e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.388525 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.388582 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zxqf\" (UniqueName: \"kubernetes.io/projected/37c8c5ed-2473-414f-8f7c-1126b0ba114e-kube-api-access-2zxqf\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.388604 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c8c5ed-2473-414f-8f7c-1126b0ba114e-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.655635 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x6ftl" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="registry-server" probeResult="failure" output=<
Jan 27 10:23:26 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s
Jan 27 10:23:26 crc kubenswrapper[4799]: >
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.704948 4799 generic.go:334] "Generic (PLEG): container finished" podID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerID="0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b" exitCode=0
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.705023 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnrr6" event={"ID":"37c8c5ed-2473-414f-8f7c-1126b0ba114e","Type":"ContainerDied","Data":"0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b"}
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.705054 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnrr6"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.705106 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnrr6" event={"ID":"37c8c5ed-2473-414f-8f7c-1126b0ba114e","Type":"ContainerDied","Data":"94257b7cbba48c9e7fd263be272d0b1b2441f52b7d9c478ff2e83ef3f8e600ea"}
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.705139 4799 scope.go:117] "RemoveContainer" containerID="0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.735689 4799 scope.go:117] "RemoveContainer" containerID="5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.737137 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnrr6"]
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.748395 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnrr6"]
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.765500 4799 scope.go:117] "RemoveContainer" containerID="cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.828177 4799 scope.go:117] "RemoveContainer" containerID="0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b"
Jan 27 10:23:26 crc kubenswrapper[4799]: E0127 10:23:26.828596 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b\": container with ID starting with 0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b not found: ID does not exist" containerID="0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.828624 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b"} err="failed to get container status \"0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b\": rpc error: code = NotFound desc = could not find container \"0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b\": container with ID starting with 0a217989dc1dc3926707e37798e80ddf15b6ca2c4142926f3113c75eaf32f02b not found: ID does not exist"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.828643 4799 scope.go:117] "RemoveContainer" containerID="5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81"
Jan 27 10:23:26 crc kubenswrapper[4799]: E0127 10:23:26.828975 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81\": container with ID starting with 5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81 not found: ID does not exist" containerID="5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.828996 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81"} err="failed to get container status \"5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81\": rpc error: code = NotFound desc = could not find container \"5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81\": container with ID starting with 5dfcf2eaebaf50ac9b18b798721629c80e5275e67d4dfc7a6bee0bd0504cfc81 not found: ID does not exist"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.829008 4799 scope.go:117] "RemoveContainer" containerID="cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186"
Jan 27 10:23:26 crc kubenswrapper[4799]: E0127 10:23:26.829314 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186\": container with ID starting with cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186 not found: ID does not exist" containerID="cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186"
Jan 27 10:23:26 crc kubenswrapper[4799]: I0127 10:23:26.829337 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186"} err="failed to get container status \"cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186\": rpc error: code = NotFound desc = could not find container \"cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186\": container with ID starting with cb1457b32b335e82fea356223bede27feb61ea9fb86f47f64b921a4937323186 not found: ID does not exist"
Jan 27 10:23:27 crc kubenswrapper[4799]: I0127 10:23:27.806623 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zz8v9"]
Jan 27 10:23:27 crc kubenswrapper[4799]: I0127 10:23:27.806903 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zz8v9" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="registry-server" containerID="cri-o://f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0" gracePeriod=2
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.270625 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.433998 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qnk9\" (UniqueName: \"kubernetes.io/projected/48c6ad23-943d-4951-bb0b-af8fbe9f4609-kube-api-access-4qnk9\") pod \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") "
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.434132 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-utilities\") pod \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") "
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.434355 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-catalog-content\") pod \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\" (UID: \"48c6ad23-943d-4951-bb0b-af8fbe9f4609\") "
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.435213 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-utilities" (OuterVolumeSpecName: "utilities") pod "48c6ad23-943d-4951-bb0b-af8fbe9f4609" (UID: "48c6ad23-943d-4951-bb0b-af8fbe9f4609"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.440606 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c6ad23-943d-4951-bb0b-af8fbe9f4609-kube-api-access-4qnk9" (OuterVolumeSpecName: "kube-api-access-4qnk9") pod "48c6ad23-943d-4951-bb0b-af8fbe9f4609" (UID: "48c6ad23-943d-4951-bb0b-af8fbe9f4609"). InnerVolumeSpecName "kube-api-access-4qnk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.468429 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" path="/var/lib/kubelet/pods/37c8c5ed-2473-414f-8f7c-1126b0ba114e/volumes"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.493771 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48c6ad23-943d-4951-bb0b-af8fbe9f4609" (UID: "48c6ad23-943d-4951-bb0b-af8fbe9f4609"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.536487 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.536518 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qnk9\" (UniqueName: \"kubernetes.io/projected/48c6ad23-943d-4951-bb0b-af8fbe9f4609-kube-api-access-4qnk9\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.536528 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6ad23-943d-4951-bb0b-af8fbe9f4609-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.735712 4799 generic.go:334] "Generic (PLEG): container finished" podID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerID="f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0" exitCode=0
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.735762 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerDied","Data":"f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0"}
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.735836 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz8v9" event={"ID":"48c6ad23-943d-4951-bb0b-af8fbe9f4609","Type":"ContainerDied","Data":"a670ce0252c3519510913c0053d86ce0b8d640d24d32ceec5ee1acf7827dee38"}
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.735839 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz8v9"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.735864 4799 scope.go:117] "RemoveContainer" containerID="f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.760951 4799 scope.go:117] "RemoveContainer" containerID="33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.769568 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zz8v9"]
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.783158 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zz8v9"]
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.798810 4799 scope.go:117] "RemoveContainer" containerID="9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.836081 4799 scope.go:117] "RemoveContainer" containerID="f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0"
Jan 27 10:23:28 crc kubenswrapper[4799]: E0127 10:23:28.837003 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0\": container with ID starting with f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0 not found: ID does not exist" containerID="f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.837094 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0"} err="failed to get container status \"f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0\": rpc error: code = NotFound desc = could not find container \"f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0\": container with ID starting with f5ba7fb88fff021f01b16a579ce824804f145eb16ac64bd06be2f58eb16092b0 not found: ID does not exist"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.837169 4799 scope.go:117] "RemoveContainer" containerID="33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd"
Jan 27 10:23:28 crc kubenswrapper[4799]: E0127 10:23:28.837505 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd\": container with ID starting with 33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd not found: ID does not exist" containerID="33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.837538 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd"} err="failed to get container status \"33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd\": rpc error: code = NotFound desc = could not find container \"33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd\": container with ID starting with 33f9c6348691369e3363457844fc45b0476d586e0a93897c803b0d11ebc21fbd not found: ID does not exist"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.837561 4799 scope.go:117] "RemoveContainer" containerID="9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1"
Jan 27 10:23:28 crc kubenswrapper[4799]: E0127 10:23:28.837976 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1\": container with ID starting with 9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1 not found: ID does not exist" containerID="9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1"
Jan 27 10:23:28 crc kubenswrapper[4799]: I0127 10:23:28.838016 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1"} err="failed to get container status \"9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1\": rpc error: code = NotFound desc = could not find container \"9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1\": container with ID starting with 9dc5962e084a296c4cd14733dad77cc28b434de83c4ab4236969d29383d88ca1 not found: ID does not exist"
Jan 27 10:23:28 crc kubenswrapper[4799]: E0127 10:23:28.901850 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48c6ad23_943d_4951_bb0b_af8fbe9f4609.slice/crio-a670ce0252c3519510913c0053d86ce0b8d640d24d32ceec5ee1acf7827dee38\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48c6ad23_943d_4951_bb0b_af8fbe9f4609.slice\": RecentStats: unable to find data in memory cache]"
Jan 27 10:23:30 crc kubenswrapper[4799]: I0127 10:23:30.462028 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" path="/var/lib/kubelet/pods/48c6ad23-943d-4951-bb0b-af8fbe9f4609/volumes"
Jan 27 10:23:35 crc kubenswrapper[4799]: I0127 10:23:35.661208 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:36 crc kubenswrapper[4799]: I0127 10:23:36.205843 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:36 crc kubenswrapper[4799]: I0127 10:23:36.297860 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6ftl"]
Jan 27 10:23:36 crc kubenswrapper[4799]: I0127 10:23:36.826606 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x6ftl" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="registry-server" containerID="cri-o://a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254" gracePeriod=2
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.371832 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6ftl"
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.490051 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-utilities\") pod \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") "
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.490561 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-catalog-content\") pod \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") "
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.490681 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsmp\" (UniqueName: \"kubernetes.io/projected/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-kube-api-access-qfsmp\") pod \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\" (UID: \"8f039cb0-72a0-4e7b-8def-6200d5dbbacb\") "
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.490915 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-utilities" (OuterVolumeSpecName: "utilities") pod "8f039cb0-72a0-4e7b-8def-6200d5dbbacb" (UID: "8f039cb0-72a0-4e7b-8def-6200d5dbbacb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.491232 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.501031 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-kube-api-access-qfsmp" (OuterVolumeSpecName: "kube-api-access-qfsmp") pod "8f039cb0-72a0-4e7b-8def-6200d5dbbacb" (UID: "8f039cb0-72a0-4e7b-8def-6200d5dbbacb"). InnerVolumeSpecName "kube-api-access-qfsmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.593750 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsmp\" (UniqueName: \"kubernetes.io/projected/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-kube-api-access-qfsmp\") on node \"crc\" DevicePath \"\""
Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.620020 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f039cb0-72a0-4e7b-8def-6200d5dbbacb" (UID: "8f039cb0-72a0-4e7b-8def-6200d5dbbacb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.695829 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f039cb0-72a0-4e7b-8def-6200d5dbbacb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.845148 4799 generic.go:334] "Generic (PLEG): container finished" podID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerID="a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254" exitCode=0 Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.845228 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6ftl" event={"ID":"8f039cb0-72a0-4e7b-8def-6200d5dbbacb","Type":"ContainerDied","Data":"a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254"} Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.845277 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6ftl" Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.845288 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6ftl" event={"ID":"8f039cb0-72a0-4e7b-8def-6200d5dbbacb","Type":"ContainerDied","Data":"4561993e10dbc3bb42b9a5b722c795db30607615cc2ea24d799ce712c128e0f0"} Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.845341 4799 scope.go:117] "RemoveContainer" containerID="a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254" Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.884944 4799 scope.go:117] "RemoveContainer" containerID="c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3" Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.910015 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6ftl"] Jan 27 10:23:37 crc kubenswrapper[4799]: I0127 10:23:37.922011 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x6ftl"] Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.466377 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" path="/var/lib/kubelet/pods/8f039cb0-72a0-4e7b-8def-6200d5dbbacb/volumes" Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.578574 4799 scope.go:117] "RemoveContainer" containerID="5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714" Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.633162 4799 scope.go:117] "RemoveContainer" containerID="a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254" Jan 27 10:23:38 crc kubenswrapper[4799]: E0127 10:23:38.634065 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254\": container with ID starting with 
a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254 not found: ID does not exist" containerID="a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254" Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.634133 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254"} err="failed to get container status \"a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254\": rpc error: code = NotFound desc = could not find container \"a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254\": container with ID starting with a4d9261027b498f74f5e1c8a264d00b4ce7a9dc1182524311e95d8c66c882254 not found: ID does not exist" Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.634177 4799 scope.go:117] "RemoveContainer" containerID="c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3" Jan 27 10:23:38 crc kubenswrapper[4799]: E0127 10:23:38.634644 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3\": container with ID starting with c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3 not found: ID does not exist" containerID="c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3" Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.634701 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3"} err="failed to get container status \"c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3\": rpc error: code = NotFound desc = could not find container \"c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3\": container with ID starting with c7a06aba7feb0539f44d5b4dcc54eb0e1ae2ffea2b7fb778ca2c9fd9e58d16d3 not found: ID does not 
exist" Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.634737 4799 scope.go:117] "RemoveContainer" containerID="5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714" Jan 27 10:23:38 crc kubenswrapper[4799]: E0127 10:23:38.635284 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714\": container with ID starting with 5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714 not found: ID does not exist" containerID="5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714" Jan 27 10:23:38 crc kubenswrapper[4799]: I0127 10:23:38.635404 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714"} err="failed to get container status \"5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714\": rpc error: code = NotFound desc = could not find container \"5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714\": container with ID starting with 5afc844c9378abf7989db15ebe84cb5e60e9741e5854dfe3cf7360e265314714 not found: ID does not exist" Jan 27 10:24:53 crc kubenswrapper[4799]: I0127 10:24:53.731233 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:24:53 crc kubenswrapper[4799]: I0127 10:24:53.732688 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 
10:25:23 crc kubenswrapper[4799]: I0127 10:25:23.732295 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:25:23 crc kubenswrapper[4799]: I0127 10:25:23.732943 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:25:53 crc kubenswrapper[4799]: I0127 10:25:53.731431 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:25:53 crc kubenswrapper[4799]: I0127 10:25:53.732136 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:25:53 crc kubenswrapper[4799]: I0127 10:25:53.732188 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 10:25:53 crc kubenswrapper[4799]: I0127 10:25:53.733066 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6"} 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:25:53 crc kubenswrapper[4799]: I0127 10:25:53.733136 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" gracePeriod=600 Jan 27 10:25:53 crc kubenswrapper[4799]: E0127 10:25:53.875950 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:25:54 crc kubenswrapper[4799]: I0127 10:25:54.351883 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" exitCode=0 Jan 27 10:25:54 crc kubenswrapper[4799]: I0127 10:25:54.352281 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6"} Jan 27 10:25:54 crc kubenswrapper[4799]: I0127 10:25:54.352338 4799 scope.go:117] "RemoveContainer" containerID="0df998f00d1367b8658a5dfcfc7a9edf1970e35af92947f241a6cf155f8c1cc4" Jan 27 10:25:54 crc kubenswrapper[4799]: I0127 10:25:54.352980 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 
27 10:25:54 crc kubenswrapper[4799]: E0127 10:25:54.353291 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:26:08 crc kubenswrapper[4799]: I0127 10:26:08.451697 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:26:08 crc kubenswrapper[4799]: E0127 10:26:08.453002 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:26:20 crc kubenswrapper[4799]: I0127 10:26:20.453556 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:26:20 crc kubenswrapper[4799]: E0127 10:26:20.454684 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:26:34 crc kubenswrapper[4799]: I0127 10:26:34.471543 4799 scope.go:117] "RemoveContainer" 
containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:26:34 crc kubenswrapper[4799]: E0127 10:26:34.476671 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:26:46 crc kubenswrapper[4799]: I0127 10:26:46.452062 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:26:46 crc kubenswrapper[4799]: E0127 10:26:46.453151 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:27:00 crc kubenswrapper[4799]: I0127 10:27:00.451906 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:27:00 crc kubenswrapper[4799]: E0127 10:27:00.455168 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.008386 4799 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cvr59"] Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.009741 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="extract-utilities" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.009764 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="extract-utilities" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.009795 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="extract-content" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.009812 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="extract-content" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.009839 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="extract-utilities" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.009852 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="extract-utilities" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.009884 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="extract-content" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.009896 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="extract-content" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.009923 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="extract-content" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.009936 4799 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="extract-content" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.009951 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.009964 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.009992 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="extract-utilities" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.010005 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="extract-utilities" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.010053 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.010066 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.010092 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.010105 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.010512 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f039cb0-72a0-4e7b-8def-6200d5dbbacb" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.010550 4799 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="48c6ad23-943d-4951-bb0b-af8fbe9f4609" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.010579 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c8c5ed-2473-414f-8f7c-1126b0ba114e" containerName="registry-server" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.013275 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.026515 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvr59"] Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.061434 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-catalog-content\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.061880 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-utilities\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.062275 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rh7h\" (UniqueName: \"kubernetes.io/projected/b37c730a-35e5-4072-bcfb-64a9cec8e080-kube-api-access-5rh7h\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.164692 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rh7h\" (UniqueName: \"kubernetes.io/projected/b37c730a-35e5-4072-bcfb-64a9cec8e080-kube-api-access-5rh7h\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.164860 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-catalog-content\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.165026 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-utilities\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.166147 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-utilities\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.166520 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-catalog-content\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.348552 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5rh7h\" (UniqueName: \"kubernetes.io/projected/b37c730a-35e5-4072-bcfb-64a9cec8e080-kube-api-access-5rh7h\") pod \"certified-operators-cvr59\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.360517 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.451822 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:27:13 crc kubenswrapper[4799]: E0127 10:27:13.452063 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:27:13 crc kubenswrapper[4799]: I0127 10:27:13.904144 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvr59"] Jan 27 10:27:14 crc kubenswrapper[4799]: I0127 10:27:14.328719 4799 generic.go:334] "Generic (PLEG): container finished" podID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerID="e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e" exitCode=0 Jan 27 10:27:14 crc kubenswrapper[4799]: I0127 10:27:14.328778 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvr59" event={"ID":"b37c730a-35e5-4072-bcfb-64a9cec8e080","Type":"ContainerDied","Data":"e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e"} Jan 27 10:27:14 crc kubenswrapper[4799]: I0127 10:27:14.329113 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvr59" event={"ID":"b37c730a-35e5-4072-bcfb-64a9cec8e080","Type":"ContainerStarted","Data":"4d955e50321ea81b97b7407c7db72f70d655a8564c937b9a3d4488b110892d55"} Jan 27 10:27:15 crc kubenswrapper[4799]: I0127 10:27:15.341253 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvr59" event={"ID":"b37c730a-35e5-4072-bcfb-64a9cec8e080","Type":"ContainerStarted","Data":"01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95"} Jan 27 10:27:16 crc kubenswrapper[4799]: I0127 10:27:16.355240 4799 generic.go:334] "Generic (PLEG): container finished" podID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerID="01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95" exitCode=0 Jan 27 10:27:16 crc kubenswrapper[4799]: I0127 10:27:16.355347 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvr59" event={"ID":"b37c730a-35e5-4072-bcfb-64a9cec8e080","Type":"ContainerDied","Data":"01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95"} Jan 27 10:27:17 crc kubenswrapper[4799]: I0127 10:27:17.370007 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvr59" event={"ID":"b37c730a-35e5-4072-bcfb-64a9cec8e080","Type":"ContainerStarted","Data":"d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e"} Jan 27 10:27:17 crc kubenswrapper[4799]: I0127 10:27:17.402936 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cvr59" podStartSLOduration=2.914426724 podStartE2EDuration="5.402916993s" podCreationTimestamp="2026-01-27 10:27:12 +0000 UTC" firstStartedPulling="2026-01-27 10:27:14.332007731 +0000 UTC m=+9700.643111796" lastFinishedPulling="2026-01-27 10:27:16.82049796 +0000 UTC m=+9703.131602065" observedRunningTime="2026-01-27 10:27:17.39728628 
+0000 UTC m=+9703.708390385" watchObservedRunningTime="2026-01-27 10:27:17.402916993 +0000 UTC m=+9703.714021068" Jan 27 10:27:23 crc kubenswrapper[4799]: I0127 10:27:23.362144 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:23 crc kubenswrapper[4799]: I0127 10:27:23.362861 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:23 crc kubenswrapper[4799]: I0127 10:27:23.449149 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:23 crc kubenswrapper[4799]: I0127 10:27:23.539197 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:23 crc kubenswrapper[4799]: I0127 10:27:23.700036 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvr59"] Jan 27 10:27:24 crc kubenswrapper[4799]: I0127 10:27:24.464618 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:27:24 crc kubenswrapper[4799]: E0127 10:27:24.464894 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:27:25 crc kubenswrapper[4799]: I0127 10:27:25.489722 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cvr59" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="registry-server" 
containerID="cri-o://d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e" gracePeriod=2 Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.026202 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.119178 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-catalog-content\") pod \"b37c730a-35e5-4072-bcfb-64a9cec8e080\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.119543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rh7h\" (UniqueName: \"kubernetes.io/projected/b37c730a-35e5-4072-bcfb-64a9cec8e080-kube-api-access-5rh7h\") pod \"b37c730a-35e5-4072-bcfb-64a9cec8e080\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.119815 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-utilities\") pod \"b37c730a-35e5-4072-bcfb-64a9cec8e080\" (UID: \"b37c730a-35e5-4072-bcfb-64a9cec8e080\") " Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.121549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-utilities" (OuterVolumeSpecName: "utilities") pod "b37c730a-35e5-4072-bcfb-64a9cec8e080" (UID: "b37c730a-35e5-4072-bcfb-64a9cec8e080"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.215597 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b37c730a-35e5-4072-bcfb-64a9cec8e080" (UID: "b37c730a-35e5-4072-bcfb-64a9cec8e080"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.222778 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.222818 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37c730a-35e5-4072-bcfb-64a9cec8e080-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.501936 4799 generic.go:334] "Generic (PLEG): container finished" podID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerID="d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e" exitCode=0 Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.502001 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvr59" event={"ID":"b37c730a-35e5-4072-bcfb-64a9cec8e080","Type":"ContainerDied","Data":"d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e"} Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.502031 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cvr59" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.502055 4799 scope.go:117] "RemoveContainer" containerID="d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.502039 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvr59" event={"ID":"b37c730a-35e5-4072-bcfb-64a9cec8e080","Type":"ContainerDied","Data":"4d955e50321ea81b97b7407c7db72f70d655a8564c937b9a3d4488b110892d55"} Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.531152 4799 scope.go:117] "RemoveContainer" containerID="01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.643451 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37c730a-35e5-4072-bcfb-64a9cec8e080-kube-api-access-5rh7h" (OuterVolumeSpecName: "kube-api-access-5rh7h") pod "b37c730a-35e5-4072-bcfb-64a9cec8e080" (UID: "b37c730a-35e5-4072-bcfb-64a9cec8e080"). InnerVolumeSpecName "kube-api-access-5rh7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.671045 4799 scope.go:117] "RemoveContainer" containerID="e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.732830 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rh7h\" (UniqueName: \"kubernetes.io/projected/b37c730a-35e5-4072-bcfb-64a9cec8e080-kube-api-access-5rh7h\") on node \"crc\" DevicePath \"\"" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.768261 4799 scope.go:117] "RemoveContainer" containerID="d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e" Jan 27 10:27:26 crc kubenswrapper[4799]: E0127 10:27:26.770675 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e\": container with ID starting with d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e not found: ID does not exist" containerID="d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.770742 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e"} err="failed to get container status \"d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e\": rpc error: code = NotFound desc = could not find container \"d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e\": container with ID starting with d04a5fc02d4d29db00254424a0514613efd03c3600f5a68027fe3fc8848da60e not found: ID does not exist" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.770774 4799 scope.go:117] "RemoveContainer" containerID="01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95" Jan 27 10:27:26 crc kubenswrapper[4799]: E0127 10:27:26.771279 
4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95\": container with ID starting with 01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95 not found: ID does not exist" containerID="01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.771490 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95"} err="failed to get container status \"01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95\": rpc error: code = NotFound desc = could not find container \"01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95\": container with ID starting with 01428f280bd7b4d4fdce0174f372d8ccdfbdf2d95fed8c8aa12e66eeefcbaa95 not found: ID does not exist" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.771580 4799 scope.go:117] "RemoveContainer" containerID="e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e" Jan 27 10:27:26 crc kubenswrapper[4799]: E0127 10:27:26.772219 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e\": container with ID starting with e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e not found: ID does not exist" containerID="e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.772269 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e"} err="failed to get container status \"e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e\": rpc error: code = 
NotFound desc = could not find container \"e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e\": container with ID starting with e09351b349186388ffd120dc97e4d1aab05f9795d33ad3801c5017fb11bec45e not found: ID does not exist" Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.852472 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvr59"] Jan 27 10:27:26 crc kubenswrapper[4799]: I0127 10:27:26.865794 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cvr59"] Jan 27 10:27:28 crc kubenswrapper[4799]: I0127 10:27:28.461872 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" path="/var/lib/kubelet/pods/b37c730a-35e5-4072-bcfb-64a9cec8e080/volumes" Jan 27 10:27:36 crc kubenswrapper[4799]: I0127 10:27:36.451992 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:27:36 crc kubenswrapper[4799]: E0127 10:27:36.453515 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:27:50 crc kubenswrapper[4799]: I0127 10:27:50.452351 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:27:50 crc kubenswrapper[4799]: E0127 10:27:50.453356 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:28:01 crc kubenswrapper[4799]: I0127 10:28:01.451787 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:28:01 crc kubenswrapper[4799]: E0127 10:28:01.452979 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:28:16 crc kubenswrapper[4799]: I0127 10:28:16.451305 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:28:16 crc kubenswrapper[4799]: E0127 10:28:16.452199 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:28:28 crc kubenswrapper[4799]: I0127 10:28:28.453354 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:28:28 crc kubenswrapper[4799]: E0127 10:28:28.454783 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:28:40 crc kubenswrapper[4799]: I0127 10:28:40.452052 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:28:40 crc kubenswrapper[4799]: E0127 10:28:40.453254 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:28:54 crc kubenswrapper[4799]: I0127 10:28:54.458146 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:28:54 crc kubenswrapper[4799]: E0127 10:28:54.459105 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:29:05 crc kubenswrapper[4799]: I0127 10:29:05.451457 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:29:05 crc kubenswrapper[4799]: E0127 10:29:05.452656 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:29:17 crc kubenswrapper[4799]: I0127 10:29:17.451933 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:29:17 crc kubenswrapper[4799]: E0127 10:29:17.453690 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:29:31 crc kubenswrapper[4799]: I0127 10:29:31.452161 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:29:31 crc kubenswrapper[4799]: E0127 10:29:31.453257 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:29:46 crc kubenswrapper[4799]: I0127 10:29:46.452766 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:29:46 crc kubenswrapper[4799]: E0127 10:29:46.453830 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.155055 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs"] Jan 27 10:30:00 crc kubenswrapper[4799]: E0127 10:30:00.156263 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="extract-content" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.156283 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="extract-content" Jan 27 10:30:00 crc kubenswrapper[4799]: E0127 10:30:00.156352 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="extract-utilities" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.156361 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="extract-utilities" Jan 27 10:30:00 crc kubenswrapper[4799]: E0127 10:30:00.156394 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="registry-server" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.156403 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="registry-server" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.156627 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37c730a-35e5-4072-bcfb-64a9cec8e080" containerName="registry-server" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.157443 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.160906 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.166132 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.186351 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs"] Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.271694 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-secret-volume\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.271784 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-config-volume\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.271929 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26j2\" (UniqueName: \"kubernetes.io/projected/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-kube-api-access-z26j2\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.374070 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z26j2\" (UniqueName: \"kubernetes.io/projected/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-kube-api-access-z26j2\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.374414 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-secret-volume\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.376643 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-config-volume\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.378542 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-config-volume\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.380912 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-secret-volume\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.405411 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z26j2\" (UniqueName: \"kubernetes.io/projected/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-kube-api-access-z26j2\") pod \"collect-profiles-29491830-nxsfs\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.495255 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:00 crc kubenswrapper[4799]: I0127 10:30:00.995810 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs"] Jan 27 10:30:01 crc kubenswrapper[4799]: I0127 10:30:01.321485 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" event={"ID":"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f","Type":"ContainerStarted","Data":"d4cd4f2f36a2f34e2233c362890076d94ab6a743514ba905420ca4fd2866dd18"} Jan 27 10:30:01 crc kubenswrapper[4799]: I0127 10:30:01.451877 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:30:01 crc kubenswrapper[4799]: E0127 10:30:01.452314 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:30:02 crc kubenswrapper[4799]: I0127 10:30:02.333012 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" event={"ID":"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f","Type":"ContainerStarted","Data":"d8577c9984080b9e578b485f12172b131a558c0ae148d37798c7656fdacc5067"} Jan 27 10:30:02 crc kubenswrapper[4799]: I0127 10:30:02.375900 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" podStartSLOduration=2.375871895 podStartE2EDuration="2.375871895s" podCreationTimestamp="2026-01-27 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:30:02.357500027 +0000 UTC m=+9868.668604132" watchObservedRunningTime="2026-01-27 10:30:02.375871895 +0000 UTC m=+9868.686976000" Jan 27 10:30:03 crc kubenswrapper[4799]: I0127 10:30:03.348141 4799 generic.go:334] "Generic (PLEG): container finished" podID="aa46df8e-7770-42ae-bbb9-8a39c26e8d9f" containerID="d8577c9984080b9e578b485f12172b131a558c0ae148d37798c7656fdacc5067" exitCode=0 Jan 27 10:30:03 crc kubenswrapper[4799]: I0127 10:30:03.348261 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" event={"ID":"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f","Type":"ContainerDied","Data":"d8577c9984080b9e578b485f12172b131a558c0ae148d37798c7656fdacc5067"} Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.801075 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.875851 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z26j2\" (UniqueName: \"kubernetes.io/projected/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-kube-api-access-z26j2\") pod \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.876032 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-secret-volume\") pod \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.876158 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-config-volume\") pod \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\" (UID: \"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f\") " Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.877439 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa46df8e-7770-42ae-bbb9-8a39c26e8d9f" (UID: "aa46df8e-7770-42ae-bbb9-8a39c26e8d9f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.942874 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa46df8e-7770-42ae-bbb9-8a39c26e8d9f" (UID: "aa46df8e-7770-42ae-bbb9-8a39c26e8d9f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.942973 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-kube-api-access-z26j2" (OuterVolumeSpecName: "kube-api-access-z26j2") pod "aa46df8e-7770-42ae-bbb9-8a39c26e8d9f" (UID: "aa46df8e-7770-42ae-bbb9-8a39c26e8d9f"). InnerVolumeSpecName "kube-api-access-z26j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.979405 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.979442 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:04 crc kubenswrapper[4799]: I0127 10:30:04.979455 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z26j2\" (UniqueName: \"kubernetes.io/projected/aa46df8e-7770-42ae-bbb9-8a39c26e8d9f-kube-api-access-z26j2\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:05 crc kubenswrapper[4799]: I0127 10:30:05.377846 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" event={"ID":"aa46df8e-7770-42ae-bbb9-8a39c26e8d9f","Type":"ContainerDied","Data":"d4cd4f2f36a2f34e2233c362890076d94ab6a743514ba905420ca4fd2866dd18"} Jan 27 10:30:05 crc kubenswrapper[4799]: I0127 10:30:05.377903 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4cd4f2f36a2f34e2233c362890076d94ab6a743514ba905420ca4fd2866dd18" Jan 27 10:30:05 crc kubenswrapper[4799]: I0127 10:30:05.377999 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-nxsfs" Jan 27 10:30:05 crc kubenswrapper[4799]: I0127 10:30:05.454001 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll"] Jan 27 10:30:05 crc kubenswrapper[4799]: I0127 10:30:05.461230 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-5kvll"] Jan 27 10:30:06 crc kubenswrapper[4799]: I0127 10:30:06.463976 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e" path="/var/lib/kubelet/pods/8e9ba4d4-4efb-4cd8-856d-760fd0a7c52e/volumes" Jan 27 10:30:15 crc kubenswrapper[4799]: I0127 10:30:15.452277 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:30:15 crc kubenswrapper[4799]: E0127 10:30:15.453536 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.495779 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tscsr/must-gather-sdlhf"] Jan 27 10:30:16 crc kubenswrapper[4799]: E0127 10:30:16.496423 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa46df8e-7770-42ae-bbb9-8a39c26e8d9f" containerName="collect-profiles" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.496436 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa46df8e-7770-42ae-bbb9-8a39c26e8d9f" containerName="collect-profiles" Jan 27 10:30:16 
crc kubenswrapper[4799]: I0127 10:30:16.496639 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa46df8e-7770-42ae-bbb9-8a39c26e8d9f" containerName="collect-profiles" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.497651 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.499952 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tscsr"/"kube-root-ca.crt" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.500172 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tscsr"/"default-dockercfg-pzfbg" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.500329 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tscsr"/"openshift-service-ca.crt" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.511652 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tscsr/must-gather-sdlhf"] Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.577515 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwmt\" (UniqueName: \"kubernetes.io/projected/a245d31a-a78c-4805-a3d8-7336f85f7cff-kube-api-access-hxwmt\") pod \"must-gather-sdlhf\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.577626 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a245d31a-a78c-4805-a3d8-7336f85f7cff-must-gather-output\") pod \"must-gather-sdlhf\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 
10:30:16.679872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a245d31a-a78c-4805-a3d8-7336f85f7cff-must-gather-output\") pod \"must-gather-sdlhf\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.680136 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwmt\" (UniqueName: \"kubernetes.io/projected/a245d31a-a78c-4805-a3d8-7336f85f7cff-kube-api-access-hxwmt\") pod \"must-gather-sdlhf\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:16 crc kubenswrapper[4799]: I0127 10:30:16.680648 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a245d31a-a78c-4805-a3d8-7336f85f7cff-must-gather-output\") pod \"must-gather-sdlhf\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:17 crc kubenswrapper[4799]: I0127 10:30:17.044596 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwmt\" (UniqueName: \"kubernetes.io/projected/a245d31a-a78c-4805-a3d8-7336f85f7cff-kube-api-access-hxwmt\") pod \"must-gather-sdlhf\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:17 crc kubenswrapper[4799]: I0127 10:30:17.112995 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:30:17 crc kubenswrapper[4799]: I0127 10:30:17.606051 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tscsr/must-gather-sdlhf"] Jan 27 10:30:17 crc kubenswrapper[4799]: I0127 10:30:17.612100 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:30:18 crc kubenswrapper[4799]: I0127 10:30:18.537687 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/must-gather-sdlhf" event={"ID":"a245d31a-a78c-4805-a3d8-7336f85f7cff","Type":"ContainerStarted","Data":"32c93276802c01cb0f82b8cd9ce3b93a77666813aafe72a0e740252cd2f98f60"} Jan 27 10:30:25 crc kubenswrapper[4799]: I0127 10:30:25.612073 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/must-gather-sdlhf" event={"ID":"a245d31a-a78c-4805-a3d8-7336f85f7cff","Type":"ContainerStarted","Data":"815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9"} Jan 27 10:30:25 crc kubenswrapper[4799]: I0127 10:30:25.612548 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/must-gather-sdlhf" event={"ID":"a245d31a-a78c-4805-a3d8-7336f85f7cff","Type":"ContainerStarted","Data":"6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f"} Jan 27 10:30:25 crc kubenswrapper[4799]: I0127 10:30:25.635758 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tscsr/must-gather-sdlhf" podStartSLOduration=2.587946645 podStartE2EDuration="9.635730619s" podCreationTimestamp="2026-01-27 10:30:16 +0000 UTC" firstStartedPulling="2026-01-27 10:30:17.611778856 +0000 UTC m=+9883.922882941" lastFinishedPulling="2026-01-27 10:30:24.65956285 +0000 UTC m=+9890.970666915" observedRunningTime="2026-01-27 10:30:25.632627205 +0000 UTC m=+9891.943731280" watchObservedRunningTime="2026-01-27 10:30:25.635730619 +0000 UTC 
m=+9891.946834724" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.451517 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:30:29 crc kubenswrapper[4799]: E0127 10:30:29.452281 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.700682 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tscsr/crc-debug-hsck8"] Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.702057 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.773008 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ba165ba-37fd-4561-958f-7daba5d8496a-host\") pod \"crc-debug-hsck8\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.773093 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zs8\" (UniqueName: \"kubernetes.io/projected/1ba165ba-37fd-4561-958f-7daba5d8496a-kube-api-access-r4zs8\") pod \"crc-debug-hsck8\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.875735 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/1ba165ba-37fd-4561-958f-7daba5d8496a-host\") pod \"crc-debug-hsck8\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.875896 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4zs8\" (UniqueName: \"kubernetes.io/projected/1ba165ba-37fd-4561-958f-7daba5d8496a-kube-api-access-r4zs8\") pod \"crc-debug-hsck8\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.875926 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ba165ba-37fd-4561-958f-7daba5d8496a-host\") pod \"crc-debug-hsck8\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:29 crc kubenswrapper[4799]: I0127 10:30:29.898167 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4zs8\" (UniqueName: \"kubernetes.io/projected/1ba165ba-37fd-4561-958f-7daba5d8496a-kube-api-access-r4zs8\") pod \"crc-debug-hsck8\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:30 crc kubenswrapper[4799]: I0127 10:30:30.025987 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:30:30 crc kubenswrapper[4799]: I0127 10:30:30.687225 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/crc-debug-hsck8" event={"ID":"1ba165ba-37fd-4561-958f-7daba5d8496a","Type":"ContainerStarted","Data":"80e623733bbd8bde2384333fb2abc0e398d9069b8cd26fc876e2af4ee2bbafbc"} Jan 27 10:30:41 crc kubenswrapper[4799]: I0127 10:30:41.781931 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/crc-debug-hsck8" event={"ID":"1ba165ba-37fd-4561-958f-7daba5d8496a","Type":"ContainerStarted","Data":"51c10f25579ffe39685dc23654ed5fbdf74e29a65707a45aea91fa19afb78d2b"} Jan 27 10:30:41 crc kubenswrapper[4799]: I0127 10:30:41.805461 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tscsr/crc-debug-hsck8" podStartSLOduration=2.066407988 podStartE2EDuration="12.805442993s" podCreationTimestamp="2026-01-27 10:30:29 +0000 UTC" firstStartedPulling="2026-01-27 10:30:30.069871527 +0000 UTC m=+9896.380975632" lastFinishedPulling="2026-01-27 10:30:40.808906572 +0000 UTC m=+9907.120010637" observedRunningTime="2026-01-27 10:30:41.802175574 +0000 UTC m=+9908.113279679" watchObservedRunningTime="2026-01-27 10:30:41.805442993 +0000 UTC m=+9908.116547058" Jan 27 10:30:44 crc kubenswrapper[4799]: I0127 10:30:44.467338 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:30:44 crc kubenswrapper[4799]: E0127 10:30:44.468208 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" 
podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" Jan 27 10:30:45 crc kubenswrapper[4799]: I0127 10:30:45.516712 4799 scope.go:117] "RemoveContainer" containerID="b0a2abb695ac227783cf064ac4326e466d5f64e58ac2e54c59af4d6f46881140" Jan 27 10:30:58 crc kubenswrapper[4799]: I0127 10:30:58.451775 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:30:58 crc kubenswrapper[4799]: I0127 10:30:58.929916 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"f4c3254e058931cf30af93b06e3a03336364d8eae7744ad2c786efe39969a52b"} Jan 27 10:30:59 crc kubenswrapper[4799]: I0127 10:30:59.941498 4799 generic.go:334] "Generic (PLEG): container finished" podID="1ba165ba-37fd-4561-958f-7daba5d8496a" containerID="51c10f25579ffe39685dc23654ed5fbdf74e29a65707a45aea91fa19afb78d2b" exitCode=0 Jan 27 10:30:59 crc kubenswrapper[4799]: I0127 10:30:59.941594 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/crc-debug-hsck8" event={"ID":"1ba165ba-37fd-4561-958f-7daba5d8496a","Type":"ContainerDied","Data":"51c10f25579ffe39685dc23654ed5fbdf74e29a65707a45aea91fa19afb78d2b"} Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.055930 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.087823 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tscsr/crc-debug-hsck8"] Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.094655 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tscsr/crc-debug-hsck8"] Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.202164 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ba165ba-37fd-4561-958f-7daba5d8496a-host\") pod \"1ba165ba-37fd-4561-958f-7daba5d8496a\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.202262 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ba165ba-37fd-4561-958f-7daba5d8496a-host" (OuterVolumeSpecName: "host") pod "1ba165ba-37fd-4561-958f-7daba5d8496a" (UID: "1ba165ba-37fd-4561-958f-7daba5d8496a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.202348 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4zs8\" (UniqueName: \"kubernetes.io/projected/1ba165ba-37fd-4561-958f-7daba5d8496a-kube-api-access-r4zs8\") pod \"1ba165ba-37fd-4561-958f-7daba5d8496a\" (UID: \"1ba165ba-37fd-4561-958f-7daba5d8496a\") " Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.202781 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ba165ba-37fd-4561-958f-7daba5d8496a-host\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.207572 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba165ba-37fd-4561-958f-7daba5d8496a-kube-api-access-r4zs8" (OuterVolumeSpecName: "kube-api-access-r4zs8") pod "1ba165ba-37fd-4561-958f-7daba5d8496a" (UID: "1ba165ba-37fd-4561-958f-7daba5d8496a"). InnerVolumeSpecName "kube-api-access-r4zs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.305265 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4zs8\" (UniqueName: \"kubernetes.io/projected/1ba165ba-37fd-4561-958f-7daba5d8496a-kube-api-access-r4zs8\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.959980 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e623733bbd8bde2384333fb2abc0e398d9069b8cd26fc876e2af4ee2bbafbc" Jan 27 10:31:01 crc kubenswrapper[4799]: I0127 10:31:01.960069 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-hsck8" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.280073 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tscsr/crc-debug-mjlcc"] Jan 27 10:31:02 crc kubenswrapper[4799]: E0127 10:31:02.280564 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba165ba-37fd-4561-958f-7daba5d8496a" containerName="container-00" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.280581 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba165ba-37fd-4561-958f-7daba5d8496a" containerName="container-00" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.280867 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba165ba-37fd-4561-958f-7daba5d8496a" containerName="container-00" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.281670 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.428029 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnskx\" (UniqueName: \"kubernetes.io/projected/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-kube-api-access-bnskx\") pod \"crc-debug-mjlcc\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.428369 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-host\") pod \"crc-debug-mjlcc\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.472563 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba165ba-37fd-4561-958f-7daba5d8496a" 
path="/var/lib/kubelet/pods/1ba165ba-37fd-4561-958f-7daba5d8496a/volumes" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.530693 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnskx\" (UniqueName: \"kubernetes.io/projected/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-kube-api-access-bnskx\") pod \"crc-debug-mjlcc\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.530793 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-host\") pod \"crc-debug-mjlcc\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.531987 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-host\") pod \"crc-debug-mjlcc\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.553803 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnskx\" (UniqueName: \"kubernetes.io/projected/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-kube-api-access-bnskx\") pod \"crc-debug-mjlcc\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.601856 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.969533 4799 generic.go:334] "Generic (PLEG): container finished" podID="54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee" containerID="c873600008c73266eb0e7e687d6b74d55b7134f7a67de5de32805be95dc57ee8" exitCode=1 Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.969647 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/crc-debug-mjlcc" event={"ID":"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee","Type":"ContainerDied","Data":"c873600008c73266eb0e7e687d6b74d55b7134f7a67de5de32805be95dc57ee8"} Jan 27 10:31:02 crc kubenswrapper[4799]: I0127 10:31:02.969801 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/crc-debug-mjlcc" event={"ID":"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee","Type":"ContainerStarted","Data":"d189b229dca5168ffdea13effda1b6e31eb0319b49bc24964b722ed40a837114"} Jan 27 10:31:03 crc kubenswrapper[4799]: I0127 10:31:03.004796 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tscsr/crc-debug-mjlcc"] Jan 27 10:31:03 crc kubenswrapper[4799]: I0127 10:31:03.017142 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tscsr/crc-debug-mjlcc"] Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.098073 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.179060 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnskx\" (UniqueName: \"kubernetes.io/projected/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-kube-api-access-bnskx\") pod \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.179133 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-host\") pod \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\" (UID: \"54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee\") " Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.179530 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-host" (OuterVolumeSpecName: "host") pod "54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee" (UID: "54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.179831 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-host\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.186212 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-kube-api-access-bnskx" (OuterVolumeSpecName: "kube-api-access-bnskx") pod "54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee" (UID: "54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee"). InnerVolumeSpecName "kube-api-access-bnskx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.281953 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnskx\" (UniqueName: \"kubernetes.io/projected/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee-kube-api-access-bnskx\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.474815 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee" path="/var/lib/kubelet/pods/54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee/volumes" Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.999340 4799 scope.go:117] "RemoveContainer" containerID="c873600008c73266eb0e7e687d6b74d55b7134f7a67de5de32805be95dc57ee8" Jan 27 10:31:04 crc kubenswrapper[4799]: I0127 10:31:04.999502 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tscsr/crc-debug-mjlcc" Jan 27 10:31:39 crc kubenswrapper[4799]: I0127 10:31:39.921944 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-765c486f4b-k85rt_12991e8b-afab-43fd-8635-c48de903d58a/barbican-api/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.137171 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-765c486f4b-k85rt_12991e8b-afab-43fd-8635-c48de903d58a/barbican-api-log/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.220656 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-df9f7bcc4-tnmxn_2435d082-5291-4270-b40e-eae6085ee3db/barbican-keystone-listener/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.385967 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-df9f7bcc4-tnmxn_2435d082-5291-4270-b40e-eae6085ee3db/barbican-keystone-listener-log/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.418195 4799 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-646689794f-ln658_42976d87-a22a-4111-9c1c-35370e961782/barbican-worker/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.455118 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-646689794f-ln658_42976d87-a22a-4111-9c1c-35370e961782/barbican-worker-log/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.667083 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7fec7388-0fd9-4481-adff-14df549f15ba/cinder-api-log/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.676320 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7fec7388-0fd9-4481-adff-14df549f15ba/cinder-api/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.903758 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0d6405a1-c618-4492-95ee-bc909981d06c/probe/0.log" Jan 27 10:31:40 crc kubenswrapper[4799]: I0127 10:31:40.961683 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0d6405a1-c618-4492-95ee-bc909981d06c/cinder-backup/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.000790 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7481c01b-ab94-4a72-a35c-033cd195be3b/cinder-scheduler/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.133901 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7481c01b-ab94-4a72-a35c-033cd195be3b/probe/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.159071 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1aaf2cd9-1e7e-487e-abc5-b49315e2b068/cinder-volume/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.227702 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-volume1-0_1aaf2cd9-1e7e-487e-abc5-b49315e2b068/probe/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.361508 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fb89b5c-8kwpv_b65b5ffb-7c8c-4092-a794-8ea6b6c490eb/init/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.506204 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fb89b5c-8kwpv_b65b5ffb-7c8c-4092-a794-8ea6b6c490eb/dnsmasq-dns/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.545974 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-557fb89b5c-8kwpv_b65b5ffb-7c8c-4092-a794-8ea6b6c490eb/init/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.615761 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_743f1808-211a-4ebd-9a0e-32af8ccf1ba8/glance-httpd/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.809522 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_743f1808-211a-4ebd-9a0e-32af8ccf1ba8/glance-log/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.959897 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6b78008e-aa20-42d5-a0a3-ec4c0481a0b6/glance-httpd/0.log" Jan 27 10:31:41 crc kubenswrapper[4799]: I0127 10:31:41.993932 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6b78008e-aa20-42d5-a0a3-ec4c0481a0b6/glance-log/0.log" Jan 27 10:31:42 crc kubenswrapper[4799]: I0127 10:31:42.229554 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29491801-86fb7_076186fc-1eef-4e54-bd41-7109370efb97/keystone-cron/0.log" Jan 27 10:31:42 crc kubenswrapper[4799]: I0127 10:31:42.271485 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-7cb6c95d94-hxzp8_a86471f0-5809-491d-8f4a-f236533017f8/keystone-api/0.log"
Jan 27 10:31:42 crc kubenswrapper[4799]: I0127 10:31:42.405123 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_cd96bf26-b547-49bf-8ee2-15c25fe611fc/adoption/0.log"
Jan 27 10:31:42 crc kubenswrapper[4799]: I0127 10:31:42.747119 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59d99bc4df-65fzn_88499b25-ea05-4dad-b96a-9ff1244b25e1/neutron-api/0.log"
Jan 27 10:31:42 crc kubenswrapper[4799]: I0127 10:31:42.811052 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59d99bc4df-65fzn_88499b25-ea05-4dad-b96a-9ff1244b25e1/neutron-httpd/0.log"
Jan 27 10:31:43 crc kubenswrapper[4799]: I0127 10:31:43.006638 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_23a85377-d942-461f-b381-da5730b8b48d/nova-api-api/0.log"
Jan 27 10:31:43 crc kubenswrapper[4799]: I0127 10:31:43.154440 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_23a85377-d942-461f-b381-da5730b8b48d/nova-api-log/0.log"
Jan 27 10:31:43 crc kubenswrapper[4799]: I0127 10:31:43.262955 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3434f389-fe51-497d-af18-a0e23a76cb52/nova-cell0-conductor-conductor/0.log"
Jan 27 10:31:43 crc kubenswrapper[4799]: I0127 10:31:43.487213 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f9091e3a-e58d-4bbb-9f81-78db65d552dd/nova-cell1-conductor-conductor/0.log"
Jan 27 10:31:43 crc kubenswrapper[4799]: I0127 10:31:43.646685 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6803f652-fe33-451f-8a37-5ab86eddd782/nova-cell1-novncproxy-novncproxy/0.log"
Jan 27 10:31:43 crc kubenswrapper[4799]: I0127 10:31:43.823065 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9e5450ba-70ac-47fc-ac5c-8cd34f80c39c/nova-metadata-log/0.log"
Jan 27 10:31:43 crc kubenswrapper[4799]: I0127 10:31:43.997784 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_096c7330-6c6a-48f8-bd44-ee5ed6893012/nova-scheduler-scheduler/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.057273 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9e5450ba-70ac-47fc-ac5c-8cd34f80c39c/nova-metadata-metadata/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.170047 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c4dfc9d78-dch6f_044571bd-c726-4af3-8344-7df2aafcca9a/init/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.279798 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c4dfc9d78-dch6f_044571bd-c726-4af3-8344-7df2aafcca9a/init/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.375365 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c4dfc9d78-dch6f_044571bd-c726-4af3-8344-7df2aafcca9a/octavia-api-provider-agent/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.521214 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-xglg8_de098d43-15c6-4e83-8ff9-704c15633680/init/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.565647 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c4dfc9d78-dch6f_044571bd-c726-4af3-8344-7df2aafcca9a/octavia-api/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.637802 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-xglg8_de098d43-15c6-4e83-8ff9-704c15633680/init/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.732758 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-xglg8_de098d43-15c6-4e83-8ff9-704c15633680/octavia-healthmanager/0.log"
Jan 27 10:31:44 crc kubenswrapper[4799]: I0127 10:31:44.766897 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-7dfgl_f3006a64-ba6c-45e8-a92d-309dbe2daaf9/init/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.057323 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-tdxwr_93bb1d47-c669-4b0e-a8a3-5b272962a266/init/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.081351 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-7dfgl_f3006a64-ba6c-45e8-a92d-309dbe2daaf9/octavia-housekeeping/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.085462 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-7dfgl_f3006a64-ba6c-45e8-a92d-309dbe2daaf9/init/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.265766 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-tdxwr_93bb1d47-c669-4b0e-a8a3-5b272962a266/init/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.291418 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-tdxwr_93bb1d47-c669-4b0e-a8a3-5b272962a266/octavia-amphora-httpd/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.398295 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-qml5p_aa282712-e1dc-48b8-99ce-d801c095eac0/init/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.785081 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-qml5p_aa282712-e1dc-48b8-99ce-d801c095eac0/octavia-rsyslog/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.864810 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-p4lqq_8f550278-caf7-42a5-9747-446a53632485/init/0.log"
Jan 27 10:31:45 crc kubenswrapper[4799]: I0127 10:31:45.872635 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-qml5p_aa282712-e1dc-48b8-99ce-d801c095eac0/init/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.018786 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-p4lqq_8f550278-caf7-42a5-9747-446a53632485/init/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.097331 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f1c13e78-9e9e-4b56-aba4-df7d2a77339d/mysql-bootstrap/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.199674 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-p4lqq_8f550278-caf7-42a5-9747-446a53632485/octavia-worker/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.298100 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f1c13e78-9e9e-4b56-aba4-df7d2a77339d/mysql-bootstrap/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.355647 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f1c13e78-9e9e-4b56-aba4-df7d2a77339d/galera/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.455426 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fbe8b2ce-30cb-4738-b519-85e0a829bcd4/mysql-bootstrap/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.662566 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fbe8b2ce-30cb-4738-b519-85e0a829bcd4/mysql-bootstrap/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.671962 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d2be17ba-8bd6-43b5-ad28-f0dd7f5e6e80/openstackclient/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.699692 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fbe8b2ce-30cb-4738-b519-85e0a829bcd4/galera/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.861742 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pd9m7_9005d037-5f85-4c10-a08d-dd696195e149/ovsdb-server-init/0.log"
Jan 27 10:31:46 crc kubenswrapper[4799]: I0127 10:31:46.904912 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wqj9j_3c28bb76-25ef-4f36-ad9c-011fc5c4687d/openstack-network-exporter/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.120879 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pd9m7_9005d037-5f85-4c10-a08d-dd696195e149/ovsdb-server/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.136046 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pd9m7_9005d037-5f85-4c10-a08d-dd696195e149/ovsdb-server-init/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.189026 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pd9m7_9005d037-5f85-4c10-a08d-dd696195e149/ovs-vswitchd/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.316357 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q5btc_1570f261-65d6-442d-8b5b-237d9497476f/ovn-controller/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.410504 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_4c8362f6-7376-4b8b-a8ac-6bc38be236a8/adoption/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.523342 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d7451de4-4685-4848-9df5-27eb6334da4e/openstack-network-exporter/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.580019 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d7451de4-4685-4848-9df5-27eb6334da4e/ovn-northd/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.610707 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8f447dcf-0d25-4c2d-aec3-bfd8dacc5ac7/memcached/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.719470 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_db6c91f2-0dfa-4126-becf-cbd05d330a85/ovsdbserver-nb/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.719814 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_db6c91f2-0dfa-4126-becf-cbd05d330a85/openstack-network-exporter/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.820797 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_52aa5460-8ac3-46cf-bd19-cb2384cb1740/openstack-network-exporter/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.841377 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_52aa5460-8ac3-46cf-bd19-cb2384cb1740/ovsdbserver-nb/0.log"
Jan 27 10:31:47 crc kubenswrapper[4799]: I0127 10:31:47.902648 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_7d2b3b2a-25ed-4404-a93f-ebae05f98ba3/openstack-network-exporter/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.009992 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_7d2b3b2a-25ed-4404-a93f-ebae05f98ba3/ovsdbserver-nb/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.052665 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_68c601d3-ce33-4bf0-9b39-811233938733/openstack-network-exporter/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.148552 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_68c601d3-ce33-4bf0-9b39-811233938733/ovsdbserver-sb/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.175080 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_f4461ea5-728b-42b7-a411-b160417adb11/openstack-network-exporter/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.279566 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_f4461ea5-728b-42b7-a411-b160417adb11/ovsdbserver-sb/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.318464 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d0cbe319-884e-4ddf-b7e5-95711b219241/openstack-network-exporter/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.377164 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d0cbe319-884e-4ddf-b7e5-95711b219241/ovsdbserver-sb/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.558588 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b699b8774-fd6pb_56fbee0b-130b-44ee-ab60-336327c2e8c2/placement-log/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.577311 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b699b8774-fd6pb_56fbee0b-130b-44ee-ab60-336327c2e8c2/placement-api/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.579805 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73389771-5c03-49f6-96ac-a57864153a5f/setup-container/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.775262 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73389771-5c03-49f6-96ac-a57864153a5f/setup-container/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.829982 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73389771-5c03-49f6-96ac-a57864153a5f/rabbitmq/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.877740 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_64f8476f-48e6-4190-8c0d-436a672f8e62/setup-container/0.log"
Jan 27 10:31:48 crc kubenswrapper[4799]: I0127 10:31:48.978289 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_64f8476f-48e6-4190-8c0d-436a672f8e62/setup-container/0.log"
Jan 27 10:31:49 crc kubenswrapper[4799]: I0127 10:31:49.033128 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_64f8476f-48e6-4190-8c0d-436a672f8e62/rabbitmq/0.log"
Jan 27 10:32:09 crc kubenswrapper[4799]: I0127 10:32:09.497166 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5_a47f41ba-039c-418f-b3aa-8f5f8f108187/util/0.log"
Jan 27 10:32:09 crc kubenswrapper[4799]: I0127 10:32:09.666255 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5_a47f41ba-039c-418f-b3aa-8f5f8f108187/util/0.log"
Jan 27 10:32:09 crc kubenswrapper[4799]: I0127 10:32:09.672389 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5_a47f41ba-039c-418f-b3aa-8f5f8f108187/pull/0.log"
Jan 27 10:32:09 crc kubenswrapper[4799]: I0127 10:32:09.705201 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5_a47f41ba-039c-418f-b3aa-8f5f8f108187/pull/0.log"
Jan 27 10:32:09 crc kubenswrapper[4799]: I0127 10:32:09.847551 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5_a47f41ba-039c-418f-b3aa-8f5f8f108187/util/0.log"
Jan 27 10:32:09 crc kubenswrapper[4799]: I0127 10:32:09.864176 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5_a47f41ba-039c-418f-b3aa-8f5f8f108187/pull/0.log"
Jan 27 10:32:09 crc kubenswrapper[4799]: I0127 10:32:09.888128 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7f87766c9caf050cf103b8f761c0102bae640638c4973d496dec8537fx5bz5_a47f41ba-039c-418f-b3aa-8f5f8f108187/extract/0.log"
Jan 27 10:32:10 crc kubenswrapper[4799]: I0127 10:32:10.158392 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-vznwp_d94d5e1a-ae08-488f-9d43-50c9d392bb64/manager/0.log"
Jan 27 10:32:10 crc kubenswrapper[4799]: I0127 10:32:10.173251 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-9qhwd_7a4d56c1-32dd-4e3b-9f11-e18d210aa5e8/manager/0.log"
Jan 27 10:32:10 crc kubenswrapper[4799]: I0127 10:32:10.280387 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-hzmgz_55e4a841-f81d-438b-adc5-e826eb530cfe/manager/0.log"
Jan 27 10:32:10 crc kubenswrapper[4799]: I0127 10:32:10.463547 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-dv7wf_e322d396-a1f7-4802-bba8-91bd472c24e3/manager/0.log"
Jan 27 10:32:10 crc kubenswrapper[4799]: I0127 10:32:10.519627 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-t6chd_8236753b-6720-430d-81cf-7b6c0de5a0ee/manager/0.log"
Jan 27 10:32:10 crc kubenswrapper[4799]: I0127 10:32:10.802957 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-z4lbd_4cd87a60-6daa-4298-bc64-ff1fb8782577/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.025067 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-cvlvn_fad3c440-9e3f-4f25-b420-f1f1beb8976e/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.271276 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-tl2kj_3017331e-6f47-4b7e-b9ad-607c6be8c20e/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.304368 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-kww5k_5aca3207-fa5d-485b-ac2c-a9c3e17081a4/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.355932 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-nc7r7_34178e14-d22f-4fbb-80e8-2a18fd062606/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.523985 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-msdv6_ff4f4931-e9c9-4b38-87e0-58a46c02b98d/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.662996 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-gwzxq_6424eee1-bc8b-46e6-86d5-405a13b0ccc9/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.920121 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-92h9x_ab6077f2-cd61-4c1a-aa99-a5aa8afc7c3f/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.925132 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f54b7d6d4-phjqb_0ecf0624-a24f-4ece-bc11-481d049df28e/manager/0.log"
Jan 27 10:32:11 crc kubenswrapper[4799]: I0127 10:32:11.968391 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854ml5wb_e7032d0e-676f-4153-87b6-0fce33337997/manager/0.log"
Jan 27 10:32:12 crc kubenswrapper[4799]: I0127 10:32:12.179499 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-54bd478649-rblcz_a03715cb-387a-4bac-8dfe-55ce28fae844/operator/0.log"
Jan 27 10:32:12 crc kubenswrapper[4799]: I0127 10:32:12.376507 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v88ds_b87380ed-955a-485e-9157-549df541f5d2/registry-server/0.log"
Jan 27 10:32:12 crc kubenswrapper[4799]: I0127 10:32:12.701600 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-9hj5w_037daa10-fc4e-42d1-9ef8-7484fd944508/manager/0.log"
Jan 27 10:32:12 crc kubenswrapper[4799]: I0127 10:32:12.734006 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cwpd7_d536c693-c313-4de1-a636-edf8d0e3504b/manager/0.log"
Jan 27 10:32:13 crc kubenswrapper[4799]: I0127 10:32:13.468734 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v4vnl_8ac41e51-af98-4db3-bdde-9d0d2d90767f/operator/0.log"
Jan 27 10:32:13 crc kubenswrapper[4799]: I0127 10:32:13.526643 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-77ttp_7b6ea7e6-0b30-432b-a1e2-c11570a47ee7/manager/0.log"
Jan 27 10:32:13 crc kubenswrapper[4799]: I0127 10:32:13.698937 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-m5npz_f5024a15-240a-410c-980d-109db1b46c03/manager/0.log"
Jan 27 10:32:13 crc kubenswrapper[4799]: I0127 10:32:13.726017 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54bc44cbfd-w99km_ac37a700-e0d3-4751-b72f-bc48bd3ef0cb/manager/0.log"
Jan 27 10:32:13 crc kubenswrapper[4799]: I0127 10:32:13.757770 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-kcrfp_c62ef33b-0827-4909-b88a-a48396df7ddd/manager/0.log"
Jan 27 10:32:13 crc kubenswrapper[4799]: I0127 10:32:13.849839 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75db85654f-hs4t2_48899916-aa13-4d02-89e3-11721dc22821/manager/0.log"
Jan 27 10:32:37 crc kubenswrapper[4799]: I0127 10:32:37.916644 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fmzz6_17f2f9b7-aad3-4959-8193-3e3e1d525141/control-plane-machine-set-operator/0.log"
Jan 27 10:32:38 crc kubenswrapper[4799]: I0127 10:32:38.054275 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g9lhq_5dc5c15b-696b-49fe-9593-102cc1e00398/kube-rbac-proxy/0.log"
Jan 27 10:32:38 crc kubenswrapper[4799]: I0127 10:32:38.069789 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g9lhq_5dc5c15b-696b-49fe-9593-102cc1e00398/machine-api-operator/0.log"
Jan 27 10:32:52 crc kubenswrapper[4799]: I0127 10:32:52.974279 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-88mbp_487704d6-26db-4d0c-9e50-443375daf632/cert-manager-controller/0.log"
Jan 27 10:32:53 crc kubenswrapper[4799]: I0127 10:32:53.154865 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-gdhvd_778c26b2-0e97-445d-98bf-054a3457ff9b/cert-manager-cainjector/0.log"
Jan 27 10:32:53 crc kubenswrapper[4799]: I0127 10:32:53.190289 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-4tz6c_d26751de-dc67-4950-8105-b3a479a70119/cert-manager-webhook/0.log"
Jan 27 10:33:08 crc kubenswrapper[4799]: I0127 10:33:08.857588 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-dbtdq_e7291c9f-4df4-41fd-b55c-ec9e771c4088/nmstate-console-plugin/0.log"
Jan 27 10:33:09 crc kubenswrapper[4799]: I0127 10:33:09.174171 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wpbqw_1a6a281a-0d48-4e63-abd5-1cf7fc08baf7/nmstate-handler/0.log"
Jan 27 10:33:09 crc kubenswrapper[4799]: I0127 10:33:09.310405 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-76m8g_05d2b510-84eb-45e6-851f-f3c8ead6c49f/kube-rbac-proxy/0.log"
Jan 27 10:33:09 crc kubenswrapper[4799]: I0127 10:33:09.324869 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-76m8g_05d2b510-84eb-45e6-851f-f3c8ead6c49f/nmstate-metrics/0.log"
Jan 27 10:33:09 crc kubenswrapper[4799]: I0127 10:33:09.446531 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pcqcj_3b644f64-e142-4c2a-89b2-f1e8a2c9f5ff/nmstate-operator/0.log"
Jan 27 10:33:09 crc kubenswrapper[4799]: I0127 10:33:09.517366 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-7tglw_b3bbbfb1-9446-4db4-a64c-4124bdb3609f/nmstate-webhook/0.log"
Jan 27 10:33:23 crc kubenswrapper[4799]: I0127 10:33:23.731660 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 10:33:23 crc kubenswrapper[4799]: I0127 10:33:23.732541 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 10:33:40 crc kubenswrapper[4799]: I0127 10:33:40.767644 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mdvm9_bf75d430-27db-44eb-b2f3-7921d18f0dc1/kube-rbac-proxy/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.014178 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-frr-files/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.179272 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mdvm9_bf75d430-27db-44eb-b2f3-7921d18f0dc1/controller/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.271636 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-reloader/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.289981 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-frr-files/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.319010 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-metrics/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.381918 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-reloader/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.531703 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-frr-files/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.531845 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-metrics/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.541216 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-reloader/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.549362 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-metrics/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.734232 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-reloader/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.755211 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-frr-files/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.761459 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/cp-metrics/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.794227 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/controller/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.933982 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/frr-metrics/0.log"
Jan 27 10:33:41 crc kubenswrapper[4799]: I0127 10:33:41.994550 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/kube-rbac-proxy/0.log"
Jan 27 10:33:42 crc kubenswrapper[4799]: I0127 10:33:42.008801 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/kube-rbac-proxy-frr/0.log"
Jan 27 10:33:42 crc kubenswrapper[4799]: I0127 10:33:42.176643 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/reloader/0.log"
Jan 27 10:33:42 crc kubenswrapper[4799]: I0127 10:33:42.246628 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-p95bt_f1e9dc09-f278-46d6-8a6f-0c617a7446f9/frr-k8s-webhook-server/0.log"
Jan 27 10:33:42 crc kubenswrapper[4799]: I0127 10:33:42.389095 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d5c6f5f66-nmqdn_06ea268e-b6bc-4056-8cb0-5113c8d2a54f/manager/0.log"
Jan 27 10:33:42 crc kubenswrapper[4799]: I0127 10:33:42.540908 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-55fb679855-n7lbv_dc463ea2-ef36-43be-82ac-dab18b86c215/webhook-server/0.log"
Jan 27 10:33:42 crc kubenswrapper[4799]: I0127 10:33:42.724997 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2klzt_9bc57c87-6dfd-4725-81b3-f8dadfb587a3/kube-rbac-proxy/0.log"
Jan 27 10:33:43 crc kubenswrapper[4799]: I0127 10:33:43.587229 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2klzt_9bc57c87-6dfd-4725-81b3-f8dadfb587a3/speaker/0.log"
Jan 27 10:33:44 crc kubenswrapper[4799]: I0127 10:33:44.537725 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tjzjd_da662e44-679d-4336-975b-374c7f799f27/frr/0.log"
Jan 27 10:33:53 crc kubenswrapper[4799]: I0127 10:33:53.731154 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 10:33:53 crc kubenswrapper[4799]: I0127 10:33:53.731730 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.049104 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sg4cs"]
Jan 27 10:33:55 crc kubenswrapper[4799]: E0127 10:33:55.051592 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee" containerName="container-00"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.051713 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee" containerName="container-00"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.052046 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f2ac05-ed38-4f2f-b72f-c30d5c9d08ee" containerName="container-00"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.055100 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.066104 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sg4cs"]
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.194706 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmkvt\" (UniqueName: \"kubernetes.io/projected/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-kube-api-access-wmkvt\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.194801 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-catalog-content\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.194846 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-utilities\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.296169 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmkvt\" (UniqueName: \"kubernetes.io/projected/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-kube-api-access-wmkvt\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.296338 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-catalog-content\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.296410 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-utilities\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.297221 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-catalog-content\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.297234 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-utilities\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.316072 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmkvt\" (UniqueName: \"kubernetes.io/projected/c4bbe90b-244e-4aa2-915b-9a5e0a51340a-kube-api-access-wmkvt\") pod \"community-operators-sg4cs\" (UID: \"c4bbe90b-244e-4aa2-915b-9a5e0a51340a\") " pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:55 crc kubenswrapper[4799]: I0127 10:33:55.380993 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sg4cs"
Jan 27 10:33:57 crc kubenswrapper[4799]: I0127 10:33:57.166596 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sg4cs"]
Jan 27 10:33:58 crc kubenswrapper[4799]: I0127 10:33:58.141378 4799 generic.go:334] "Generic (PLEG): container finished" podID="c4bbe90b-244e-4aa2-915b-9a5e0a51340a" containerID="9fa8efb41aa7453676442ede77798ef7417f75906bbe9a68b03638f0feb02483" exitCode=0
Jan 27 10:33:58 crc kubenswrapper[4799]: I0127 10:33:58.141516 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sg4cs" event={"ID":"c4bbe90b-244e-4aa2-915b-9a5e0a51340a","Type":"ContainerDied","Data":"9fa8efb41aa7453676442ede77798ef7417f75906bbe9a68b03638f0feb02483"}
Jan 27 10:33:58 crc kubenswrapper[4799]: I0127 10:33:58.142582 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sg4cs" event={"ID":"c4bbe90b-244e-4aa2-915b-9a5e0a51340a","Type":"ContainerStarted","Data":"cfca0deba5d96a66a636a54b97937721bf8a46c774092a936ccd25f661cc0675"}
Jan 27 10:33:59 crc kubenswrapper[4799]: I0127 10:33:59.821449 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5_f7d3f13c-4213-485e-bbf6-88453f2abd8b/util/0.log"
Jan 27 10:34:00 crc kubenswrapper[4799]: I0127 10:34:00.077174 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5_f7d3f13c-4213-485e-bbf6-88453f2abd8b/util/0.log"
Jan 27 10:34:00 crc kubenswrapper[4799]: I0127 10:34:00.078294 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5_f7d3f13c-4213-485e-bbf6-88453f2abd8b/pull/0.log"
Jan 27 10:34:00 crc kubenswrapper[4799]: I0127 10:34:00.102236 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5_f7d3f13c-4213-485e-bbf6-88453f2abd8b/pull/0.log"
Jan 27 10:34:00 crc kubenswrapper[4799]: I0127 10:34:00.208367 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5_f7d3f13c-4213-485e-bbf6-88453f2abd8b/util/0.log"
Jan 27 10:34:00 crc kubenswrapper[4799]: I0127 10:34:00.278554 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5_f7d3f13c-4213-485e-bbf6-88453f2abd8b/extract/0.log"
Jan 27 10:34:00 crc kubenswrapper[4799]: I0127 10:34:00.303593 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahpdb5_f7d3f13c-4213-485e-bbf6-88453f2abd8b/pull/0.log"
Jan 27 10:34:00 crc kubenswrapper[4799]: I0127 10:34:00.967624 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286_401be67a-e5de-4ad0-bf00-9294434cc929/util/0.log"
Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.149278 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286_401be67a-e5de-4ad0-bf00-9294434cc929/pull/0.log"
Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.176055 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286_401be67a-e5de-4ad0-bf00-9294434cc929/util/0.log"
Jan 27 
10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.187536 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286_401be67a-e5de-4ad0-bf00-9294434cc929/pull/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.369636 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286_401be67a-e5de-4ad0-bf00-9294434cc929/util/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.404163 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286_401be67a-e5de-4ad0-bf00-9294434cc929/extract/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.404308 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxj286_401be67a-e5de-4ad0-bf00-9294434cc929/pull/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.635674 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927_a3c8980e-d12a-4646-bc2a-ab79fa15f95e/util/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.711586 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927_a3c8980e-d12a-4646-bc2a-ab79fa15f95e/util/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.723559 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927_a3c8980e-d12a-4646-bc2a-ab79fa15f95e/pull/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.748910 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927_a3c8980e-d12a-4646-bc2a-ab79fa15f95e/pull/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.890741 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927_a3c8980e-d12a-4646-bc2a-ab79fa15f95e/util/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.949341 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927_a3c8980e-d12a-4646-bc2a-ab79fa15f95e/extract/0.log" Jan 27 10:34:01 crc kubenswrapper[4799]: I0127 10:34:01.949949 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fz927_a3c8980e-d12a-4646-bc2a-ab79fa15f95e/pull/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.084150 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpdx6_2d44c7a9-27c0-4266-a833-0932010c632a/extract-utilities/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.243944 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpdx6_2d44c7a9-27c0-4266-a833-0932010c632a/extract-utilities/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.247792 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpdx6_2d44c7a9-27c0-4266-a833-0932010c632a/extract-content/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.292948 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpdx6_2d44c7a9-27c0-4266-a833-0932010c632a/extract-content/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.426274 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-mpdx6_2d44c7a9-27c0-4266-a833-0932010c632a/extract-content/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.523320 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpdx6_2d44c7a9-27c0-4266-a833-0932010c632a/extract-utilities/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.675807 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbv8_94b25278-7417-40b6-beea-2640e1fadd55/extract-utilities/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.830435 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbv8_94b25278-7417-40b6-beea-2640e1fadd55/extract-content/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.851529 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbv8_94b25278-7417-40b6-beea-2640e1fadd55/extract-utilities/0.log" Jan 27 10:34:02 crc kubenswrapper[4799]: I0127 10:34:02.890547 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbv8_94b25278-7417-40b6-beea-2640e1fadd55/extract-content/0.log" Jan 27 10:34:03 crc kubenswrapper[4799]: I0127 10:34:03.126654 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbv8_94b25278-7417-40b6-beea-2640e1fadd55/extract-utilities/0.log" Jan 27 10:34:03 crc kubenswrapper[4799]: I0127 10:34:03.139723 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbv8_94b25278-7417-40b6-beea-2640e1fadd55/extract-content/0.log" Jan 27 10:34:03 crc kubenswrapper[4799]: I0127 10:34:03.303967 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-sg4cs_c4bbe90b-244e-4aa2-915b-9a5e0a51340a/extract-utilities/0.log" Jan 27 10:34:03 crc kubenswrapper[4799]: I0127 10:34:03.550979 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sg4cs_c4bbe90b-244e-4aa2-915b-9a5e0a51340a/extract-utilities/0.log" Jan 27 10:34:03 crc kubenswrapper[4799]: I0127 10:34:03.776927 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sg4cs_c4bbe90b-244e-4aa2-915b-9a5e0a51340a/extract-utilities/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.013136 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b8j25_69748a95-cef3-4ad3-99aa-7e59a1f7683c/marketplace-operator/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.070152 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rcbv8_94b25278-7417-40b6-beea-2640e1fadd55/registry-server/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.226534 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6qcrh_53c54203-e089-4140-af14-4223823e95f8/extract-utilities/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.332529 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6qcrh_53c54203-e089-4140-af14-4223823e95f8/extract-utilities/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.387975 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6qcrh_53c54203-e089-4140-af14-4223823e95f8/extract-content/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.415130 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6qcrh_53c54203-e089-4140-af14-4223823e95f8/extract-content/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.565543 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6qcrh_53c54203-e089-4140-af14-4223823e95f8/extract-utilities/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.567553 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6qcrh_53c54203-e089-4140-af14-4223823e95f8/extract-content/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.727545 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vr7hb_285a5405-cd08-4633-b71e-ba771ebba82f/extract-utilities/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.948073 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vr7hb_285a5405-cd08-4633-b71e-ba771ebba82f/extract-content/0.log" Jan 27 10:34:04 crc kubenswrapper[4799]: I0127 10:34:04.984987 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vr7hb_285a5405-cd08-4633-b71e-ba771ebba82f/extract-content/0.log" Jan 27 10:34:05 crc kubenswrapper[4799]: I0127 10:34:05.010130 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vr7hb_285a5405-cd08-4633-b71e-ba771ebba82f/extract-utilities/0.log" Jan 27 10:34:05 crc kubenswrapper[4799]: I0127 10:34:05.135471 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6qcrh_53c54203-e089-4140-af14-4223823e95f8/registry-server/0.log" Jan 27 10:34:05 crc kubenswrapper[4799]: I0127 10:34:05.194996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sg4cs" 
event={"ID":"c4bbe90b-244e-4aa2-915b-9a5e0a51340a","Type":"ContainerStarted","Data":"17a398c6813264e94bb586563e18366e0849977b676bc25ee0b8855c4f95609e"} Jan 27 10:34:05 crc kubenswrapper[4799]: I0127 10:34:05.199042 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpdx6_2d44c7a9-27c0-4266-a833-0932010c632a/registry-server/0.log" Jan 27 10:34:05 crc kubenswrapper[4799]: I0127 10:34:05.240040 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vr7hb_285a5405-cd08-4633-b71e-ba771ebba82f/extract-utilities/0.log" Jan 27 10:34:05 crc kubenswrapper[4799]: I0127 10:34:05.254145 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vr7hb_285a5405-cd08-4633-b71e-ba771ebba82f/extract-content/0.log" Jan 27 10:34:06 crc kubenswrapper[4799]: I0127 10:34:06.233375 4799 generic.go:334] "Generic (PLEG): container finished" podID="c4bbe90b-244e-4aa2-915b-9a5e0a51340a" containerID="17a398c6813264e94bb586563e18366e0849977b676bc25ee0b8855c4f95609e" exitCode=0 Jan 27 10:34:06 crc kubenswrapper[4799]: I0127 10:34:06.233601 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sg4cs" event={"ID":"c4bbe90b-244e-4aa2-915b-9a5e0a51340a","Type":"ContainerDied","Data":"17a398c6813264e94bb586563e18366e0849977b676bc25ee0b8855c4f95609e"} Jan 27 10:34:06 crc kubenswrapper[4799]: I0127 10:34:06.450116 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vr7hb_285a5405-cd08-4633-b71e-ba771ebba82f/registry-server/0.log" Jan 27 10:34:07 crc kubenswrapper[4799]: I0127 10:34:07.244654 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sg4cs" event={"ID":"c4bbe90b-244e-4aa2-915b-9a5e0a51340a","Type":"ContainerStarted","Data":"f39db780e1c73f2cf0b69437ca14f084963e823b6ccdfdb06047454cf479b0a7"} Jan 27 
10:34:07 crc kubenswrapper[4799]: I0127 10:34:07.272041 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sg4cs" podStartSLOduration=3.6400831609999997 podStartE2EDuration="12.27201882s" podCreationTimestamp="2026-01-27 10:33:55 +0000 UTC" firstStartedPulling="2026-01-27 10:33:58.144344454 +0000 UTC m=+10104.455448559" lastFinishedPulling="2026-01-27 10:34:06.776280163 +0000 UTC m=+10113.087384218" observedRunningTime="2026-01-27 10:34:07.264991229 +0000 UTC m=+10113.576095294" watchObservedRunningTime="2026-01-27 10:34:07.27201882 +0000 UTC m=+10113.583122885" Jan 27 10:34:15 crc kubenswrapper[4799]: I0127 10:34:15.381351 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sg4cs" Jan 27 10:34:15 crc kubenswrapper[4799]: I0127 10:34:15.382010 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sg4cs" Jan 27 10:34:15 crc kubenswrapper[4799]: I0127 10:34:15.449188 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sg4cs" Jan 27 10:34:16 crc kubenswrapper[4799]: I0127 10:34:16.423479 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sg4cs" Jan 27 10:34:16 crc kubenswrapper[4799]: I0127 10:34:16.514684 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sg4cs"] Jan 27 10:34:16 crc kubenswrapper[4799]: I0127 10:34:16.563779 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rcbv8"] Jan 27 10:34:16 crc kubenswrapper[4799]: I0127 10:34:16.564108 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rcbv8" podUID="94b25278-7417-40b6-beea-2640e1fadd55" 
containerName="registry-server" containerID="cri-o://05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9" gracePeriod=2 Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.050676 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.091451 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ftxl\" (UniqueName: \"kubernetes.io/projected/94b25278-7417-40b6-beea-2640e1fadd55-kube-api-access-5ftxl\") pod \"94b25278-7417-40b6-beea-2640e1fadd55\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.091517 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-catalog-content\") pod \"94b25278-7417-40b6-beea-2640e1fadd55\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.091546 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-utilities\") pod \"94b25278-7417-40b6-beea-2640e1fadd55\" (UID: \"94b25278-7417-40b6-beea-2640e1fadd55\") " Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.092360 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-utilities" (OuterVolumeSpecName: "utilities") pod "94b25278-7417-40b6-beea-2640e1fadd55" (UID: "94b25278-7417-40b6-beea-2640e1fadd55"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.096766 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b25278-7417-40b6-beea-2640e1fadd55-kube-api-access-5ftxl" (OuterVolumeSpecName: "kube-api-access-5ftxl") pod "94b25278-7417-40b6-beea-2640e1fadd55" (UID: "94b25278-7417-40b6-beea-2640e1fadd55"). InnerVolumeSpecName "kube-api-access-5ftxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.142341 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94b25278-7417-40b6-beea-2640e1fadd55" (UID: "94b25278-7417-40b6-beea-2640e1fadd55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.194392 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ftxl\" (UniqueName: \"kubernetes.io/projected/94b25278-7417-40b6-beea-2640e1fadd55-kube-api-access-5ftxl\") on node \"crc\" DevicePath \"\"" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.194707 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.194717 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b25278-7417-40b6-beea-2640e1fadd55-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.350680 4799 generic.go:334] "Generic (PLEG): container finished" podID="94b25278-7417-40b6-beea-2640e1fadd55" 
containerID="05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9" exitCode=0 Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.351575 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcbv8" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.354343 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbv8" event={"ID":"94b25278-7417-40b6-beea-2640e1fadd55","Type":"ContainerDied","Data":"05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9"} Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.354374 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcbv8" event={"ID":"94b25278-7417-40b6-beea-2640e1fadd55","Type":"ContainerDied","Data":"7c464dd34960b082cd96ea7d8e2c48438ab83a4f61fdd212389af2c563a87438"} Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.354412 4799 scope.go:117] "RemoveContainer" containerID="05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.377728 4799 scope.go:117] "RemoveContainer" containerID="4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.385941 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rcbv8"] Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.392621 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rcbv8"] Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.434132 4799 scope.go:117] "RemoveContainer" containerID="16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.452937 4799 scope.go:117] "RemoveContainer" containerID="05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9" Jan 27 
10:34:17 crc kubenswrapper[4799]: E0127 10:34:17.453330 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9\": container with ID starting with 05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9 not found: ID does not exist" containerID="05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.453369 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9"} err="failed to get container status \"05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9\": rpc error: code = NotFound desc = could not find container \"05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9\": container with ID starting with 05ffb26f03a6f8f75fb8409058142157030bc5496562b68fa3bdf0203984b9a9 not found: ID does not exist" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.453392 4799 scope.go:117] "RemoveContainer" containerID="4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6" Jan 27 10:34:17 crc kubenswrapper[4799]: E0127 10:34:17.453656 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6\": container with ID starting with 4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6 not found: ID does not exist" containerID="4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.453685 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6"} err="failed to get container status 
\"4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6\": rpc error: code = NotFound desc = could not find container \"4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6\": container with ID starting with 4ace08b68844a9083f2a814fe24e484f9b5659035fc9256d61e312e60a1168f6 not found: ID does not exist" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.453735 4799 scope.go:117] "RemoveContainer" containerID="16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0" Jan 27 10:34:17 crc kubenswrapper[4799]: E0127 10:34:17.454001 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0\": container with ID starting with 16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0 not found: ID does not exist" containerID="16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0" Jan 27 10:34:17 crc kubenswrapper[4799]: I0127 10:34:17.454043 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0"} err="failed to get container status \"16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0\": rpc error: code = NotFound desc = could not find container \"16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0\": container with ID starting with 16b7627d66f5268f963ef45ba6d0c4bdbbe73909a490ce90df1c403ba4197df0 not found: ID does not exist" Jan 27 10:34:18 crc kubenswrapper[4799]: I0127 10:34:18.463702 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b25278-7417-40b6-beea-2640e1fadd55" path="/var/lib/kubelet/pods/94b25278-7417-40b6-beea-2640e1fadd55/volumes" Jan 27 10:34:23 crc kubenswrapper[4799]: I0127 10:34:23.731905 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:34:23 crc kubenswrapper[4799]: I0127 10:34:23.732592 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:34:23 crc kubenswrapper[4799]: I0127 10:34:23.732679 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" Jan 27 10:34:23 crc kubenswrapper[4799]: I0127 10:34:23.733793 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4c3254e058931cf30af93b06e3a03336364d8eae7744ad2c786efe39969a52b"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:34:23 crc kubenswrapper[4799]: I0127 10:34:23.733880 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://f4c3254e058931cf30af93b06e3a03336364d8eae7744ad2c786efe39969a52b" gracePeriod=600 Jan 27 10:34:24 crc kubenswrapper[4799]: I0127 10:34:24.436894 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="f4c3254e058931cf30af93b06e3a03336364d8eae7744ad2c786efe39969a52b" exitCode=0 Jan 27 10:34:24 crc kubenswrapper[4799]: I0127 10:34:24.436993 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"f4c3254e058931cf30af93b06e3a03336364d8eae7744ad2c786efe39969a52b"} Jan 27 10:34:24 crc kubenswrapper[4799]: I0127 10:34:24.437497 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerStarted","Data":"bac937bb5e96885fa612b0a47f9a3531db920738a861d83936eac3b4ea921286"} Jan 27 10:34:24 crc kubenswrapper[4799]: I0127 10:34:24.437537 4799 scope.go:117] "RemoveContainer" containerID="ec329faff1f0a899695bfc3d26f1d1f4e6a5582da50bf8b32bfba410efb172d6" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.437986 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sfdzw"] Jan 27 10:34:36 crc kubenswrapper[4799]: E0127 10:34:36.439127 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b25278-7417-40b6-beea-2640e1fadd55" containerName="extract-content" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.439143 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b25278-7417-40b6-beea-2640e1fadd55" containerName="extract-content" Jan 27 10:34:36 crc kubenswrapper[4799]: E0127 10:34:36.439174 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b25278-7417-40b6-beea-2640e1fadd55" containerName="extract-utilities" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.439183 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b25278-7417-40b6-beea-2640e1fadd55" containerName="extract-utilities" Jan 27 10:34:36 crc kubenswrapper[4799]: E0127 10:34:36.439200 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b25278-7417-40b6-beea-2640e1fadd55" containerName="registry-server" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.439211 4799 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="94b25278-7417-40b6-beea-2640e1fadd55" containerName="registry-server" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.439536 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b25278-7417-40b6-beea-2640e1fadd55" containerName="registry-server" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.441820 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.448597 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfdzw"] Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.562636 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh24\" (UniqueName: \"kubernetes.io/projected/c9213fb7-b587-4dc8-ad15-6d1c838e2277-kube-api-access-bmh24\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.563022 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-utilities\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.563114 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-catalog-content\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.664192 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-utilities\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.664253 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-catalog-content\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.664413 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmh24\" (UniqueName: \"kubernetes.io/projected/c9213fb7-b587-4dc8-ad15-6d1c838e2277-kube-api-access-bmh24\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.665748 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-catalog-content\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.666013 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-utilities\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.688540 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bmh24\" (UniqueName: \"kubernetes.io/projected/c9213fb7-b587-4dc8-ad15-6d1c838e2277-kube-api-access-bmh24\") pod \"redhat-marketplace-sfdzw\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:36 crc kubenswrapper[4799]: I0127 10:34:36.781927 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:37 crc kubenswrapper[4799]: I0127 10:34:37.327809 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfdzw"] Jan 27 10:34:37 crc kubenswrapper[4799]: I0127 10:34:37.610955 4799 generic.go:334] "Generic (PLEG): container finished" podID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerID="7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30" exitCode=0 Jan 27 10:34:37 crc kubenswrapper[4799]: I0127 10:34:37.611295 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfdzw" event={"ID":"c9213fb7-b587-4dc8-ad15-6d1c838e2277","Type":"ContainerDied","Data":"7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30"} Jan 27 10:34:37 crc kubenswrapper[4799]: I0127 10:34:37.611367 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfdzw" event={"ID":"c9213fb7-b587-4dc8-ad15-6d1c838e2277","Type":"ContainerStarted","Data":"95aa658dd5c2ddcefbc5a271a2fe97a1ee6e3440b736ce8b9f983a3c47f6b3bf"} Jan 27 10:34:39 crc kubenswrapper[4799]: I0127 10:34:39.649845 4799 generic.go:334] "Generic (PLEG): container finished" podID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerID="306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f" exitCode=0 Jan 27 10:34:39 crc kubenswrapper[4799]: I0127 10:34:39.649931 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfdzw" 
event={"ID":"c9213fb7-b587-4dc8-ad15-6d1c838e2277","Type":"ContainerDied","Data":"306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f"} Jan 27 10:34:40 crc kubenswrapper[4799]: I0127 10:34:40.660648 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfdzw" event={"ID":"c9213fb7-b587-4dc8-ad15-6d1c838e2277","Type":"ContainerStarted","Data":"4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af"} Jan 27 10:34:40 crc kubenswrapper[4799]: I0127 10:34:40.681358 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sfdzw" podStartSLOduration=2.232884727 podStartE2EDuration="4.681342393s" podCreationTimestamp="2026-01-27 10:34:36 +0000 UTC" firstStartedPulling="2026-01-27 10:34:37.613732459 +0000 UTC m=+10143.924836524" lastFinishedPulling="2026-01-27 10:34:40.062190125 +0000 UTC m=+10146.373294190" observedRunningTime="2026-01-27 10:34:40.676949013 +0000 UTC m=+10146.988053078" watchObservedRunningTime="2026-01-27 10:34:40.681342393 +0000 UTC m=+10146.992446458" Jan 27 10:34:46 crc kubenswrapper[4799]: I0127 10:34:46.782129 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:46 crc kubenswrapper[4799]: I0127 10:34:46.782784 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:46 crc kubenswrapper[4799]: I0127 10:34:46.863783 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:47 crc kubenswrapper[4799]: I0127 10:34:47.796690 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:47 crc kubenswrapper[4799]: I0127 10:34:47.855398 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-sfdzw"] Jan 27 10:34:49 crc kubenswrapper[4799]: I0127 10:34:49.750266 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sfdzw" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="registry-server" containerID="cri-o://4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af" gracePeriod=2 Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.299092 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.465091 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-catalog-content\") pod \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.465704 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmh24\" (UniqueName: \"kubernetes.io/projected/c9213fb7-b587-4dc8-ad15-6d1c838e2277-kube-api-access-bmh24\") pod \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.465854 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-utilities\") pod \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\" (UID: \"c9213fb7-b587-4dc8-ad15-6d1c838e2277\") " Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.467538 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-utilities" (OuterVolumeSpecName: "utilities") pod "c9213fb7-b587-4dc8-ad15-6d1c838e2277" (UID: 
"c9213fb7-b587-4dc8-ad15-6d1c838e2277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.476023 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9213fb7-b587-4dc8-ad15-6d1c838e2277-kube-api-access-bmh24" (OuterVolumeSpecName: "kube-api-access-bmh24") pod "c9213fb7-b587-4dc8-ad15-6d1c838e2277" (UID: "c9213fb7-b587-4dc8-ad15-6d1c838e2277"). InnerVolumeSpecName "kube-api-access-bmh24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.513128 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9213fb7-b587-4dc8-ad15-6d1c838e2277" (UID: "c9213fb7-b587-4dc8-ad15-6d1c838e2277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.568532 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.568566 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmh24\" (UniqueName: \"kubernetes.io/projected/c9213fb7-b587-4dc8-ad15-6d1c838e2277-kube-api-access-bmh24\") on node \"crc\" DevicePath \"\"" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.568580 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9213fb7-b587-4dc8-ad15-6d1c838e2277-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.784880 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerID="4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af" exitCode=0 Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.784979 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sfdzw" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.785019 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfdzw" event={"ID":"c9213fb7-b587-4dc8-ad15-6d1c838e2277","Type":"ContainerDied","Data":"4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af"} Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.787139 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sfdzw" event={"ID":"c9213fb7-b587-4dc8-ad15-6d1c838e2277","Type":"ContainerDied","Data":"95aa658dd5c2ddcefbc5a271a2fe97a1ee6e3440b736ce8b9f983a3c47f6b3bf"} Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.787183 4799 scope.go:117] "RemoveContainer" containerID="4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.820452 4799 scope.go:117] "RemoveContainer" containerID="306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.846551 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfdzw"] Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.861726 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sfdzw"] Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.881089 4799 scope.go:117] "RemoveContainer" containerID="7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.925860 4799 scope.go:117] "RemoveContainer" 
containerID="4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af" Jan 27 10:34:50 crc kubenswrapper[4799]: E0127 10:34:50.926573 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af\": container with ID starting with 4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af not found: ID does not exist" containerID="4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.926644 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af"} err="failed to get container status \"4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af\": rpc error: code = NotFound desc = could not find container \"4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af\": container with ID starting with 4461bafd02dacf93b2f71826a8946d0952c85ff3b2f609c89ac9d16d1e1ca2af not found: ID does not exist" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.926686 4799 scope.go:117] "RemoveContainer" containerID="306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f" Jan 27 10:34:50 crc kubenswrapper[4799]: E0127 10:34:50.927197 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f\": container with ID starting with 306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f not found: ID does not exist" containerID="306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.927248 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f"} err="failed to get container status \"306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f\": rpc error: code = NotFound desc = could not find container \"306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f\": container with ID starting with 306a6fd5d4845523f5c741212d2c4dd66a98660bb1d3611b1f210a2d4af4411f not found: ID does not exist" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.927277 4799 scope.go:117] "RemoveContainer" containerID="7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30" Jan 27 10:34:50 crc kubenswrapper[4799]: E0127 10:34:50.927734 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30\": container with ID starting with 7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30 not found: ID does not exist" containerID="7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30" Jan 27 10:34:50 crc kubenswrapper[4799]: I0127 10:34:50.927784 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30"} err="failed to get container status \"7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30\": rpc error: code = NotFound desc = could not find container \"7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30\": container with ID starting with 7146b45baf60489e130898151ff193c2f0eb5ad4bfe6b7d1909d9a66b8e0af30 not found: ID does not exist" Jan 27 10:34:52 crc kubenswrapper[4799]: I0127 10:34:52.484929 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" path="/var/lib/kubelet/pods/c9213fb7-b587-4dc8-ad15-6d1c838e2277/volumes" Jan 27 10:35:44 crc kubenswrapper[4799]: I0127 
10:35:44.415547 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tscsr/must-gather-sdlhf" event={"ID":"a245d31a-a78c-4805-a3d8-7336f85f7cff","Type":"ContainerDied","Data":"6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f"} Jan 27 10:35:44 crc kubenswrapper[4799]: I0127 10:35:44.415438 4799 generic.go:334] "Generic (PLEG): container finished" podID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerID="6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f" exitCode=0 Jan 27 10:35:44 crc kubenswrapper[4799]: I0127 10:35:44.417194 4799 scope.go:117] "RemoveContainer" containerID="6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f" Jan 27 10:35:45 crc kubenswrapper[4799]: I0127 10:35:45.313958 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tscsr_must-gather-sdlhf_a245d31a-a78c-4805-a3d8-7336f85f7cff/gather/0.log" Jan 27 10:35:52 crc kubenswrapper[4799]: I0127 10:35:52.558180 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tscsr/must-gather-sdlhf"] Jan 27 10:35:52 crc kubenswrapper[4799]: I0127 10:35:52.559574 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tscsr/must-gather-sdlhf" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerName="copy" containerID="cri-o://815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9" gracePeriod=2 Jan 27 10:35:52 crc kubenswrapper[4799]: I0127 10:35:52.574532 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tscsr/must-gather-sdlhf"] Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.114162 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tscsr_must-gather-sdlhf_a245d31a-a78c-4805-a3d8-7336f85f7cff/copy/0.log" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.114797 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.182474 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a245d31a-a78c-4805-a3d8-7336f85f7cff-must-gather-output\") pod \"a245d31a-a78c-4805-a3d8-7336f85f7cff\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.182540 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxwmt\" (UniqueName: \"kubernetes.io/projected/a245d31a-a78c-4805-a3d8-7336f85f7cff-kube-api-access-hxwmt\") pod \"a245d31a-a78c-4805-a3d8-7336f85f7cff\" (UID: \"a245d31a-a78c-4805-a3d8-7336f85f7cff\") " Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.189633 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a245d31a-a78c-4805-a3d8-7336f85f7cff-kube-api-access-hxwmt" (OuterVolumeSpecName: "kube-api-access-hxwmt") pod "a245d31a-a78c-4805-a3d8-7336f85f7cff" (UID: "a245d31a-a78c-4805-a3d8-7336f85f7cff"). InnerVolumeSpecName "kube-api-access-hxwmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.283856 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxwmt\" (UniqueName: \"kubernetes.io/projected/a245d31a-a78c-4805-a3d8-7336f85f7cff-kube-api-access-hxwmt\") on node \"crc\" DevicePath \"\"" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.328238 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a245d31a-a78c-4805-a3d8-7336f85f7cff-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a245d31a-a78c-4805-a3d8-7336f85f7cff" (UID: "a245d31a-a78c-4805-a3d8-7336f85f7cff"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.385598 4799 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a245d31a-a78c-4805-a3d8-7336f85f7cff-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.519849 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tscsr_must-gather-sdlhf_a245d31a-a78c-4805-a3d8-7336f85f7cff/copy/0.log" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.520393 4799 generic.go:334] "Generic (PLEG): container finished" podID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerID="815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9" exitCode=143 Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.520441 4799 scope.go:117] "RemoveContainer" containerID="815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.520554 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tscsr/must-gather-sdlhf" Jan 27 10:35:53 crc kubenswrapper[4799]: I0127 10:35:53.557525 4799 scope.go:117] "RemoveContainer" containerID="6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f" Jan 27 10:35:54 crc kubenswrapper[4799]: I0127 10:35:54.378257 4799 scope.go:117] "RemoveContainer" containerID="815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9" Jan 27 10:35:54 crc kubenswrapper[4799]: E0127 10:35:54.382904 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9\": container with ID starting with 815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9 not found: ID does not exist" containerID="815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9" Jan 27 10:35:54 crc kubenswrapper[4799]: I0127 10:35:54.382949 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9"} err="failed to get container status \"815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9\": rpc error: code = NotFound desc = could not find container \"815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9\": container with ID starting with 815a35240be15129dc2dc6d5ed9b53ece124546a32b9c4764afa6105c497f6a9 not found: ID does not exist" Jan 27 10:35:54 crc kubenswrapper[4799]: I0127 10:35:54.382971 4799 scope.go:117] "RemoveContainer" containerID="6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f" Jan 27 10:35:54 crc kubenswrapper[4799]: E0127 10:35:54.402547 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f\": container with ID starting with 
6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f not found: ID does not exist" containerID="6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f" Jan 27 10:35:54 crc kubenswrapper[4799]: I0127 10:35:54.402589 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f"} err="failed to get container status \"6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f\": rpc error: code = NotFound desc = could not find container \"6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f\": container with ID starting with 6abdd27a533555e5804f1514b13f05082790f99317ce917942ef4b028ac1de0f not found: ID does not exist" Jan 27 10:35:54 crc kubenswrapper[4799]: I0127 10:35:54.479159 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" path="/var/lib/kubelet/pods/a245d31a-a78c-4805-a3d8-7336f85f7cff/volumes" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.631930 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xk9gs"] Jan 27 10:35:59 crc kubenswrapper[4799]: E0127 10:35:59.633141 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerName="copy" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633154 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerName="copy" Jan 27 10:35:59 crc kubenswrapper[4799]: E0127 10:35:59.633167 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="extract-utilities" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633174 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="extract-utilities" Jan 27 10:35:59 crc 
kubenswrapper[4799]: E0127 10:35:59.633206 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="extract-content" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633214 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="extract-content" Jan 27 10:35:59 crc kubenswrapper[4799]: E0127 10:35:59.633252 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="registry-server" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633259 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="registry-server" Jan 27 10:35:59 crc kubenswrapper[4799]: E0127 10:35:59.633280 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerName="gather" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633286 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerName="gather" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633668 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9213fb7-b587-4dc8-ad15-6d1c838e2277" containerName="registry-server" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633689 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerName="copy" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.633703 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a245d31a-a78c-4805-a3d8-7336f85f7cff" containerName="gather" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.636187 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.645398 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xk9gs"] Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.791530 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-catalog-content\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.791583 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwlcf\" (UniqueName: \"kubernetes.io/projected/67a7db00-9033-4706-8fba-aeb152163a82-kube-api-access-vwlcf\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.791617 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-utilities\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.893290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-catalog-content\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.893351 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vwlcf\" (UniqueName: \"kubernetes.io/projected/67a7db00-9033-4706-8fba-aeb152163a82-kube-api-access-vwlcf\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.893376 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-utilities\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.893864 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-utilities\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.894075 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-catalog-content\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.911721 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwlcf\" (UniqueName: \"kubernetes.io/projected/67a7db00-9033-4706-8fba-aeb152163a82-kube-api-access-vwlcf\") pod \"redhat-operators-xk9gs\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") " pod="openshift-marketplace/redhat-operators-xk9gs" Jan 27 10:35:59 crc kubenswrapper[4799]: I0127 10:35:59.981999 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xk9gs"
Jan 27 10:36:00 crc kubenswrapper[4799]: I0127 10:36:00.433398 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xk9gs"]
Jan 27 10:36:00 crc kubenswrapper[4799]: I0127 10:36:00.584897 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk9gs" event={"ID":"67a7db00-9033-4706-8fba-aeb152163a82","Type":"ContainerStarted","Data":"74340bbe3ffef00059ac41068cd2150e0f4fa54576a564ae9ee5404c6a82682f"}
Jan 27 10:36:01 crc kubenswrapper[4799]: I0127 10:36:01.593069 4799 generic.go:334] "Generic (PLEG): container finished" podID="67a7db00-9033-4706-8fba-aeb152163a82" containerID="d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b" exitCode=0
Jan 27 10:36:01 crc kubenswrapper[4799]: I0127 10:36:01.593314 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk9gs" event={"ID":"67a7db00-9033-4706-8fba-aeb152163a82","Type":"ContainerDied","Data":"d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b"}
Jan 27 10:36:01 crc kubenswrapper[4799]: I0127 10:36:01.595261 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 10:36:03 crc kubenswrapper[4799]: I0127 10:36:03.617924 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk9gs" event={"ID":"67a7db00-9033-4706-8fba-aeb152163a82","Type":"ContainerStarted","Data":"f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131"}
Jan 27 10:36:04 crc kubenswrapper[4799]: I0127 10:36:04.629862 4799 generic.go:334] "Generic (PLEG): container finished" podID="67a7db00-9033-4706-8fba-aeb152163a82" containerID="f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131" exitCode=0
Jan 27 10:36:04 crc kubenswrapper[4799]: I0127 10:36:04.629903 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk9gs" event={"ID":"67a7db00-9033-4706-8fba-aeb152163a82","Type":"ContainerDied","Data":"f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131"}
Jan 27 10:36:05 crc kubenswrapper[4799]: I0127 10:36:05.647439 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk9gs" event={"ID":"67a7db00-9033-4706-8fba-aeb152163a82","Type":"ContainerStarted","Data":"86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee"}
Jan 27 10:36:05 crc kubenswrapper[4799]: I0127 10:36:05.679937 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xk9gs" podStartSLOduration=3.218477227 podStartE2EDuration="6.679922778s" podCreationTimestamp="2026-01-27 10:35:59 +0000 UTC" firstStartedPulling="2026-01-27 10:36:01.595079714 +0000 UTC m=+10227.906183769" lastFinishedPulling="2026-01-27 10:36:05.056525245 +0000 UTC m=+10231.367629320" observedRunningTime="2026-01-27 10:36:05.676597967 +0000 UTC m=+10231.987702082" watchObservedRunningTime="2026-01-27 10:36:05.679922778 +0000 UTC m=+10231.991026843"
Jan 27 10:36:09 crc kubenswrapper[4799]: I0127 10:36:09.982603 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xk9gs"
Jan 27 10:36:09 crc kubenswrapper[4799]: I0127 10:36:09.983255 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xk9gs"
Jan 27 10:36:11 crc kubenswrapper[4799]: I0127 10:36:11.050691 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xk9gs" podUID="67a7db00-9033-4706-8fba-aeb152163a82" containerName="registry-server" probeResult="failure" output=<
Jan 27 10:36:11 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s
Jan 27 10:36:11 crc kubenswrapper[4799]: >
Jan 27 10:36:20 crc kubenswrapper[4799]: I0127 10:36:20.067735 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xk9gs"
Jan 27 10:36:20 crc kubenswrapper[4799]: I0127 10:36:20.851429 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xk9gs"
Jan 27 10:36:20 crc kubenswrapper[4799]: I0127 10:36:20.905172 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xk9gs"]
Jan 27 10:36:21 crc kubenswrapper[4799]: I0127 10:36:21.864255 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xk9gs" podUID="67a7db00-9033-4706-8fba-aeb152163a82" containerName="registry-server" containerID="cri-o://86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee" gracePeriod=2
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.433905 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xk9gs"
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.437360 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-utilities\") pod \"67a7db00-9033-4706-8fba-aeb152163a82\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") "
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.437460 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-catalog-content\") pod \"67a7db00-9033-4706-8fba-aeb152163a82\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") "
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.437522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwlcf\" (UniqueName: \"kubernetes.io/projected/67a7db00-9033-4706-8fba-aeb152163a82-kube-api-access-vwlcf\") pod \"67a7db00-9033-4706-8fba-aeb152163a82\" (UID: \"67a7db00-9033-4706-8fba-aeb152163a82\") "
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.439064 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-utilities" (OuterVolumeSpecName: "utilities") pod "67a7db00-9033-4706-8fba-aeb152163a82" (UID: "67a7db00-9033-4706-8fba-aeb152163a82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.439274 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.610002 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67a7db00-9033-4706-8fba-aeb152163a82" (UID: "67a7db00-9033-4706-8fba-aeb152163a82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.642356 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a7db00-9033-4706-8fba-aeb152163a82-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.882441 4799 generic.go:334] "Generic (PLEG): container finished" podID="67a7db00-9033-4706-8fba-aeb152163a82" containerID="86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee" exitCode=0
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.882506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk9gs" event={"ID":"67a7db00-9033-4706-8fba-aeb152163a82","Type":"ContainerDied","Data":"86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee"}
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.882607 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk9gs" event={"ID":"67a7db00-9033-4706-8fba-aeb152163a82","Type":"ContainerDied","Data":"74340bbe3ffef00059ac41068cd2150e0f4fa54576a564ae9ee5404c6a82682f"}
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.882531 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xk9gs"
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.882650 4799 scope.go:117] "RemoveContainer" containerID="86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee"
Jan 27 10:36:22 crc kubenswrapper[4799]: I0127 10:36:22.919335 4799 scope.go:117] "RemoveContainer" containerID="f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.252737 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a7db00-9033-4706-8fba-aeb152163a82-kube-api-access-vwlcf" (OuterVolumeSpecName: "kube-api-access-vwlcf") pod "67a7db00-9033-4706-8fba-aeb152163a82" (UID: "67a7db00-9033-4706-8fba-aeb152163a82"). InnerVolumeSpecName "kube-api-access-vwlcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.254750 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwlcf\" (UniqueName: \"kubernetes.io/projected/67a7db00-9033-4706-8fba-aeb152163a82-kube-api-access-vwlcf\") on node \"crc\" DevicePath \"\""
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.303733 4799 scope.go:117] "RemoveContainer" containerID="d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.483406 4799 scope.go:117] "RemoveContainer" containerID="86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee"
Jan 27 10:36:23 crc kubenswrapper[4799]: E0127 10:36:23.484353 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee\": container with ID starting with 86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee not found: ID does not exist" containerID="86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.484409 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee"} err="failed to get container status \"86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee\": rpc error: code = NotFound desc = could not find container \"86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee\": container with ID starting with 86eb25030181067340d5612db518a368f29dadf0d87e34748dafe9417a73a4ee not found: ID does not exist"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.484442 4799 scope.go:117] "RemoveContainer" containerID="f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131"
Jan 27 10:36:23 crc kubenswrapper[4799]: E0127 10:36:23.485512 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131\": container with ID starting with f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131 not found: ID does not exist" containerID="f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.485551 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131"} err="failed to get container status \"f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131\": rpc error: code = NotFound desc = could not find container \"f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131\": container with ID starting with f8c510936b562b592a062c94eb160184de7ac6826d9fa7fcbf3725411b087131 not found: ID does not exist"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.485576 4799 scope.go:117] "RemoveContainer" containerID="d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b"
Jan 27 10:36:23 crc kubenswrapper[4799]: E0127 10:36:23.490909 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b\": container with ID starting with d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b not found: ID does not exist" containerID="d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.490967 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b"} err="failed to get container status \"d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b\": rpc error: code = NotFound desc = could not find container \"d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b\": container with ID starting with d9e8f83c04defc44fcdc6c302c2b4425536a74807c224ad707b9cb32a910636b not found: ID does not exist"
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.540560 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xk9gs"]
Jan 27 10:36:23 crc kubenswrapper[4799]: I0127 10:36:23.548558 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xk9gs"]
Jan 27 10:36:24 crc kubenswrapper[4799]: I0127 10:36:24.464016 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a7db00-9033-4706-8fba-aeb152163a82" path="/var/lib/kubelet/pods/67a7db00-9033-4706-8fba-aeb152163a82/volumes"
Jan 27 10:36:45 crc kubenswrapper[4799]: I0127 10:36:45.762741 4799 scope.go:117] "RemoveContainer" containerID="51c10f25579ffe39685dc23654ed5fbdf74e29a65707a45aea91fa19afb78d2b"
Jan 27 10:36:53 crc kubenswrapper[4799]: I0127 10:36:53.731286 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 10:36:53 crc kubenswrapper[4799]: I0127 10:36:53.732117 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 10:37:23 crc kubenswrapper[4799]: I0127 10:37:23.731175 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 10:37:23 crc kubenswrapper[4799]: I0127 10:37:23.731660 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.730924 4799 patch_prober.go:28] interesting pod/machine-config-daemon-sqpcz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.731833 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.731937 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz"
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.733061 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bac937bb5e96885fa612b0a47f9a3531db920738a861d83936eac3b4ea921286"} pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.733164 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerName="machine-config-daemon" containerID="cri-o://bac937bb5e96885fa612b0a47f9a3531db920738a861d83936eac3b4ea921286" gracePeriod=600
Jan 27 10:37:53 crc kubenswrapper[4799]: E0127 10:37:53.876128 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.881964 4799 generic.go:334] "Generic (PLEG): container finished" podID="058f98c8-1b84-48d3-8167-ad1a5584351c" containerID="bac937bb5e96885fa612b0a47f9a3531db920738a861d83936eac3b4ea921286" exitCode=0
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.882036 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" event={"ID":"058f98c8-1b84-48d3-8167-ad1a5584351c","Type":"ContainerDied","Data":"bac937bb5e96885fa612b0a47f9a3531db920738a861d83936eac3b4ea921286"}
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.882078 4799 scope.go:117] "RemoveContainer" containerID="f4c3254e058931cf30af93b06e3a03336364d8eae7744ad2c786efe39969a52b"
Jan 27 10:37:53 crc kubenswrapper[4799]: I0127 10:37:53.882755 4799 scope.go:117] "RemoveContainer" containerID="bac937bb5e96885fa612b0a47f9a3531db920738a861d83936eac3b4ea921286"
Jan 27 10:37:53 crc kubenswrapper[4799]: E0127 10:37:53.883171 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqpcz_openshift-machine-config-operator(058f98c8-1b84-48d3-8167-ad1a5584351c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqpcz" podUID="058f98c8-1b84-48d3-8167-ad1a5584351c"